CN106778522A - Single-sample face recognition method based on Gabor feature extraction and spatial transformation - Google Patents

Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Info

Publication number
CN106778522A
Authority
CN
China
Prior art keywords
gabor
matrix
feature
transformation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611059543.6A
Other languages
Chinese (zh)
Other versions
CN106778522B (en)
Inventor
葛洪伟
李莉
江明
朱嘉钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huiyouba Technology Co ltd
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201611059543.6A priority Critical patent/CN106778522B/en
Publication of CN106778522A publication Critical patent/CN106778522A/en
Application granted granted Critical
Publication of CN106778522B publication Critical patent/CN106778522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a single-sample face recognition method based on Gabor feature extraction and spatial transformation, which mainly solves the problem that traditional face recognition methods cannot be applied when only a single training sample image is available, because the intra-class scatter matrix is then zero. The method uses Gabor wavelets to extract spatial feature vectors from the original single-sample image, fuses the extracted spatial feature vectors with the original spectral feature vectors, applies a feature space transformation to project the fused feature matrix into a low-dimensional subspace, and finally completes recognition with a nearest neighbor classifier. The method of the invention accurately recognizes single-sample faces, improves recognition accuracy, and reduces computational cost. Compared with the prior art, the proposed face recognition method is more effective and robust.

Description

A single-sample face recognition method based on Gabor feature extraction and spatial transformation

Technical Field

The invention belongs to the field of pattern recognition and image processing and relates to the face recognition problem that arises when only a single face image is available, in which case the intra-class scatter matrix is zero and traditional face recognition methods cannot be used. Specifically, it is a single-sample face recognition method based on Gabor feature extraction and spatial transformation, which can be used for video surveillance, identity recognition, and similar applications in single-sample scenarios.

Background Art

Face recognition is one of the most important biometric recognition technologies and is currently widely used in video surveillance, supervision and law enforcement, multimedia, process control, identity recognition, and other fields. Many researchers have already achieved substantial results in this area. However, harsh or specific environments often pose entirely new challenges to face recognition systems. For example, law enforcement officers may have only the single facial image on a criminal's identity card, and surveillance comparison can then be performed only against that one image. In such a single-image scenario, face recognition becomes very difficult, mainly because the intra-class scatter matrix in commonly used classification models is zero, so traditional methods such as Fisher linear discriminant analysis and maximum scatter difference cannot be applied directly. This situation is usually called the single-training-sample face recognition problem in an unconstrained environment. Accurately and automatically recognizing a single, previously unseen face captured by surveillance under poor illumination and with large variations in pose and expression is a major challenge. The face recognition problem with a single training sample has not yet been solved satisfactorily.

In recent years, some researchers at home and abroad have studied single-sample training images. Gao et al. proposed a Fisher linear discriminant analysis (FLDA) method based on singular value decomposition (SVD) for single-sample training images (Gao Q, Zhang L, Zhang D. Face recognition using FLDA with single training image per person [J]. Applied Mathematics and Computation, 2008, 205(2): 726-734.). The method first applies SVD to the single image to obtain a set of base images, and then uses the original image together with these base images to construct new training images for each class, so that the intra-class scatter matrix in the FLDA model can be obtained. Koc and Barkana proposed a method based on QR decomposition with column pivoting (QRCP) (Koc M, Barkana A. A new solution to one sample problem in face recognition using FLDA [J]. Applied Mathematics and Computation, 2011, 217(24): 10368-10376.). This method processes the single training sample image with QRCP, likewise obtaining a set of base images, then uses these base images to construct an approximate image containing 97% of the energy of the original image, and finally uses the approximate image together with the original image to form new training images for each class, from which the intra-class scatter matrix in the FLDA model is obtained. Li et al. proposed another way to obtain the intra-class scatter matrix (Li L, Gao J, Ge H. A new face recognition method via semi-discrete decomposition for one sample problem [J]. Optik - International Journal for Light and Electron Optics, 2016, 127(19): 7408-7417.). That method replaces SVD or QRCP with semi-discrete decomposition (SDD), obtains approximate images via a manually set decomposition energy parameter, and likewise obtains the intra-class scatter matrix in the FLDA model.

Although both the SVD-based and QRCP-based Fisher linear discriminant analysis methods can solve the single-training-sample face recognition problem, they have the following three shortcomings: (1) the reconstructed approximate images are not fully satisfactory or convincing; (2) in the QRCP-based method, no theoretical analysis or explanation is given for why the approximate image must contain at least 97% of the energy of the original image, and in the SVD-based method, when the number of approximate images is greater than 4, the base images differ only slightly from the original image; (3) the SVD-based and QRCP-based methods ignore the decomposition and storage cost of large-scale images.

In terms of recognition rate and recognition time, the SDD-based method outperforms the SVD-based and QRCP-based methods and requires less storage space, but it still has two main drawbacks: (1) its stopping criterion requires manual control; (2) it still relies on the Fisher discriminant criterion, that is, it still uses the intra-class scatter matrix and the inter-class scatter matrix to obtain effective discriminant information.

Summary of the Invention

In view of the above problems, the invention proposes a single-sample face recognition method based on Gabor feature extraction and spatial transformation to solve the problem that the intra-class scatter matrix is zero in the single-sample image scenario. The method not only greatly reduces the feature dimensionality used for face recognition but also guarantees good recognition performance, thereby improving the accuracy and robustness of face recognition in practical applications.

The key techniques for realizing the invention are as follows: in the case of a single training sample image, Gabor wavelets are first used to extract spatial feature vectors from the original single training sample image; the extracted spatial feature vectors are then fused with the original spectral feature vectors to obtain a fused feature matrix; a feature space transformation is applied to project the fused feature matrix into a low-dimensional subspace; finally, recognition is performed with a nearest neighbor classifier. The single-sample face recognition method based on Gabor feature extraction and spatial transformation not only greatly improves the recognition rate and reduces the computational complexity, but is also more effective and robust than the prior art.

To achieve the above goal, the specific implementation steps are as follows:

(1) Use Gabor wavelets to extract the spatial information of a single image.

(1.1) Construct the Gabor filter function: the invention uses a two-dimensional Gabor filter, a Gaussian kernel modulated by a complex sinusoidal plane wave, to extract the spatial information of a single image. It is defined as:

where f is the center frequency of the complex sinusoidal plane wave, θ represents the direction of the normal to the parallel stripes of the Gabor function, φ is the phase, σ is the standard deviation, and γ is the spatial aspect ratio specifying the ellipticity of the support of the Gabor function.
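
The defining equation appears only as an image in the original document. A commonly used form of the 2-D Gabor kernel consistent with the parameters listed above (an assumed reconstruction, not necessarily the exact expression in the patent) is

$$
g(x,y)=\exp\!\left(-\frac{x'^{2}+\gamma^{2}y'^{2}}{2\sigma^{2}}\right)\exp\!\left(j\,(2\pi f x'+\varphi)\right),\qquad
x'=x\cos\theta+y\sin\theta,\quad y'=-x\sin\theta+y\cos\theta.
$$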

(1.2) Construct the Gabor filter bank: a Gabor filter bank consists of a group of Gabor filters with different frequencies and directions. In the invention we use a Gabor filter bank with five different scales and eight different directions, given by the following two formulas:
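
The two parameter formulas are likewise given only as images. A standard parameterization of a five-scale, eight-direction bank (an assumption consistent with common Gabor filter bank designs) is

$$
f_{u}=\frac{f_{\max}}{(\sqrt{2})^{u}},\quad u=0,1,\dots,4,\qquad
\theta_{v}=\frac{v\pi}{8},\quad v=0,1,\dots,7.
$$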

(1.3) Gabor representation of a face image: for a face image A(x, y), its Gabor representation is obtained by convolving the original image with the Gabor filters, i.e.:

where G(x, y) denotes the two-dimensional convolution result of the Gabor filter at scale u and direction v. The size of G(x, y) is determined by a downsampling factor ξ; the result is normalized to zero mean and unit variance, yielding a filtered feature matrix Z_{u,v} ∈ R^{m×n}.

(1.4) Construct the Gabor direction block feature matrix: convert the filtered feature matrix Z_{u,v} obtained in (1.3) into a one-dimensional column vector, and let Z_0 denote the Gabor direction block feature matrix of the face image A(x, y) over the five scales and eight directions, as follows:

where each column is the one-dimensional representation of Z_{u,v} at a given scale and direction, and Z_0 ∈ R^{(m·n)×40} is the Gabor direction block feature matrix obtained from the convolution results G(x, y).

2. Fuse the spatial information extracted by the Gabor wavelets with the spectral information of the original image itself.

From step 1 the Gabor direction block feature matrix Z_0 ∈ R^{(m·n)×40} is obtained. On the other hand, since the single training sample image itself contains very important spectral information, the Gabor spatial feature vectors are fused with the spectral feature vector Y_0 to obtain the fused feature matrix F ∈ R^{(m·n)×41}:

where σ_1 and σ_2 are the standard deviations of Z_0 and Y_0, respectively, which can be obtained by computing the square root of the variance of the feature vectors.
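
The fusion formula itself appears as an image in the original. A formulation consistent with the surrounding description (an assumed reconstruction) normalizes each part by its standard deviation before concatenation:

$$
F=\left[\tfrac{1}{\sigma_{1}}Z_{0},\;\tfrac{1}{\sigma_{2}}Y_{0}\right]\in\mathbb{R}^{(m\cdot n)\times 41}.
$$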

3. Feature space transformation based on the fused feature matrix.

(3.1) Establish the fused-feature optimization model.

(3.2) Perform the feature space transformation to obtain the transformation matrix.

4. Construct the projected feature vector.

For a test feature vector f ∈ R^{n×1}, the projected feature vector is obtained by the following linear transformation:
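
The transformation formula appears as an image in the original. Given the transformation matrix W obtained in step 3, the projection presumably takes the standard linear form (an assumption)

$$
y=W^{T}f\in\mathbb{R}^{t\times 1},
$$

where t is the number of retained projection vectors.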

Obviously, the computational complexity is significantly reduced in this way.

5. After the projected feature vectors are obtained, the nearest neighbor classifier is used for recognition.

The method of the invention has the following advantages:

(1) It overcomes the difficulty encountered with a single training sample: because the intra-class scatter matrix in the single-training-sample model is zero, the traditional Fisher criterion fails, whereas the method of the invention reconstructs the intra-class scatter matrix through Gabor filtering and feature space transformation.

(2) The invention can make full use of the spatial information of the original image and its own spectral information. At the same time, the Gabor-based spatial feature information is more robust than the spectral feature information of the image itself and can avoid local distortions caused by changes in expression, pose, and illumination. The recognition rate and recognition speed are greatly improved, and the computational cost is reduced. Compared with the prior art, the face recognition method proposed by the invention is more effective and robust.

Brief Description of the Drawings

Fig. 1 Flowchart of the method of the invention

Fig. 2 Real parts of the Gabor filters at 5 different scales and 8 different directions

Fig. 3 Convolution results of 2 Gabor filters on a single face image

Fig. 4 Five different face images from each dataset

Fig. 5 Recognition rates of the four methods under different numbers of projection vectors (ORL face database)

Fig. 6 Recognition rates of the four methods under different numbers of projection vectors (Yale face database)

Fig. 7 Recognition rates of the four methods under different numbers of projection vectors (FERET face database)

Detailed Description of Embodiments

The invention is a single-sample face recognition method based on Gabor feature extraction and spatial transformation. Referring to Fig. 1, the specific implementation steps of the invention are as follows.

Step 1. Use Gabor wavelets to extract the spatial information of a single image.

(1.1) Construct the Gabor filter function: the invention uses a two-dimensional Gabor filter, a Gaussian kernel modulated by a complex sinusoidal plane wave, to extract the spatial information of a single image. It is defined as:

where f is the center frequency of the complex sinusoidal plane wave, θ represents the direction of the normal to the parallel stripes of the Gabor function, φ is the phase, σ is the standard deviation, and γ is the spatial aspect ratio specifying the ellipticity of the support of the Gabor function.

(1.2) Construct the Gabor filter bank: a Gabor filter bank consists of a group of Gabor filters with different frequencies and directions. In the invention we use a Gabor filter bank with five different scales and eight different directions, given by the following two formulas:

(1.3) Gabor representation of a face image: for a face image A(x, y), its Gabor representation is obtained by convolving the original image with the Gabor filters, i.e.:

where G(x, y) denotes the two-dimensional convolution result of the Gabor filter at scale u and direction v. The size of G(x, y) is determined by a downsampling factor ξ; the result is normalized to zero mean and unit variance, yielding a filtered feature matrix Z_{u,v} ∈ R^{m×n}.

(1.4) Construct the Gabor direction block feature matrix: convert the filtered feature matrix Z_{u,v} obtained in (1.3) into a one-dimensional column vector, and let Z_0 denote the Gabor direction block feature matrix of the face image A(x, y) over the five scales and eight directions, as follows:

where each column is the one-dimensional representation of Z_{u,v} at a given scale and direction, and Z_0 ∈ R^{(m·n)×40} is the Gabor direction block feature matrix obtained from the convolution results G(x, y).
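
To make the pipeline of steps (1.1)-(1.4) concrete, the following is a minimal Python sketch using NumPy and SciPy. The kernel form, the parameter values (f_max, sigma, gamma, the kernel size, and the downsampling factor xi), and the use of the filter-response magnitude are illustrative assumptions rather than the patent's exact settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(f, theta, sigma=2.0, gamma=0.5, phi=0.0, size=31):
    """Complex 2-D Gabor kernel: a Gaussian envelope modulated by a plane wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (2 * np.pi * f * xr + phi))
    return envelope * carrier

def gabor_block_features(image, f_max=0.25, xi=4):
    """Return Z0 with one column per (scale, direction) pair (5 x 8 = 40 columns)."""
    columns = []
    for u in range(5):                               # five scales
        f = f_max / (np.sqrt(2) ** u)
        for v in range(8):                           # eight directions
            theta = v * np.pi / 8
            g = fftconvolve(image, gabor_kernel(f, theta), mode="same")
            mag = np.abs(g)[::xi, ::xi]              # downsample by the factor xi
            mag = (mag - mag.mean()) / (mag.std() + 1e-12)   # zero mean, unit variance
            columns.append(mag.ravel())              # one-dimensional column vector
    return np.stack(columns, axis=1)                 # Z0 with 40 columns
```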

Step 2. Fuse the spatial feature information extracted by the Gabor wavelets with the spectral information of the original image.

Since the single sample image itself contains very important spectral information, the Gabor spatial feature vectors are fused with the spectral feature vector Y_0 to obtain the fused feature matrix F ∈ R^{(m·n)×41}:

where σ_1 and σ_2 are the standard deviations of Z_0 and Y_0, respectively, which can be obtained by computing the square root of the variance of the feature vectors.
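
A corresponding sketch of the fusion step, continuing the assumptions above: the original image, downsampled to the same grid as the Gabor responses, is used here as the spectral feature vector Y0, and each part is divided by its standard deviation before concatenation.

```python
import numpy as np

def fuse_features(Z0, image, xi=4):
    """Fuse the Gabor block features Z0 (d x 40) with the spectral column Y0 (d x 1)."""
    Y0 = image[::xi, ::xi].astype(float).ravel()[:, None]    # spectral feature vector
    F = np.hstack([Z0 / (Z0.std() + 1e-12),                  # each part divided by its
                   Y0 / (Y0.std() + 1e-12)])                 # standard deviation
    return F                                                 # fused feature matrix (d x 41)
```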

Step 3. Perform the feature space transformation on the fused feature matrix F.

(3.1) Establish the fused-feature optimization model.

In order to distinguish each class among all training images, the differences within a class should be as small as possible and, conversely, the differences between sample images of different classes should be as large as possible. Inspired by the idea of the Fisher criterion, we establish the following fused-feature optimization model:

Our goal is to use the feature space transformation to project the fused feature matrix into a low-dimensional feature subspace and to find an optimal linear transformation matrix that maximizes the separability between classes.

(3.2) Perform the feature space transformation to obtain the transformation matrix W2.

(3.2a) Construct the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix.

Suppose that n-dimensional training samples are obtained from the fused feature matrix, c is the number of classes, and n_i (i = 1, 2, …, c) is the number of training samples of the i-th class. The Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix are then defined as follows:

where f_i^j is the j-th fused feature vector from the i-th class, f_i is the mean vector of the i-th class, and f_0 is the mean vector of all training samples.
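
The defining equations are given as images in the original. Standard Fisher-style definitions consistent with this description (an assumed reconstruction) are

$$
S_{b}^{G}=\sum_{i=1}^{c} n_{i}\,(f_{i}-f_{0})(f_{i}-f_{0})^{T},\qquad
S_{w}^{G}=\sum_{i=1}^{c}\sum_{j=1}^{n_{i}}\left(f_{i}^{\,j}-f_{i}\right)\left(f_{i}^{\,j}-f_{i}\right)^{T}.
$$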

The transformation matrix W is obtained by simultaneously maximizing the Gabor inter-class scatter and minimizing the Gabor intra-class scatter. The optimization model is as follows:
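
The optimization model itself appears as an image in the original. A trace-ratio form consistent with the Fisher-style description above (an assumed reconstruction) is

$$
W^{*}=\arg\max_{W}\;\frac{\operatorname{tr}\left(W^{T}S_{b}^{G}W\right)}{\operatorname{tr}\left(W^{T}S_{w}^{G}W\right)}.
$$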

However, when the scatter matrices in the above formula are singular, the above criterion does not hold. In this case, matrix decomposition and feature space transformation usually play an important role. In the feature space transformation, since our goal is to maximize the differences between classes, the null space of the inter-class scatter matrix should be discarded because it contains no useful information, while the important information contained in the null space of the intra-class scatter matrix should be retained.

(3.2b) Project the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix into an s_1-dimensional subspace and obtain the transformation W_1.

In this step, first consider the singular value decomposition (SVD) of the Gabor inter-class scatter matrix:

Partition U_b into blocks, where U_{b1} corresponds to the nonzero singular values and U_{b2} to the zero singular values. Therefore,

a formula of the following form is obtained:

where the left factor is a column-orthogonal matrix and the middle factor is a diagonal matrix with non-increasing, positive diagonal elements. In practical applications, the singularity of the inter-class scatter matrix causes a reduction in discriminative ability; therefore, its zero eigenvalues and the corresponding eigenvectors should be discarded. Based on the above considerations, the original data are first transformed into the s_1-dimensional space, which yields the transformation W_1.
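
The decomposition and the resulting transformation are shown as images in the original. A reconstruction consistent with the surrounding text, in the style of direct-LDA derivations (an assumption), is

$$
S_{b}^{G}=U_{b}\Sigma_{b}U_{b}^{T}
=\left[\,U_{b1}\;U_{b2}\,\right]
\begin{bmatrix}\Sigma_{b1}&0\\0&0\end{bmatrix}
\left[\,U_{b1}\;U_{b2}\,\right]^{T},\qquad
W_{1}=U_{b1}\Sigma_{b1}^{-1/2}\in\mathbb{R}^{n\times s_{1}},
$$

so that $W_{1}^{T}S_{b}^{G}W_{1}=I_{s_{1}}$.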

(3.2c) Perform the related transformation in the s_1-dimensional transformed space to obtain the transformation W_2.

The transformation W_1 has been obtained in step (3.2b). In the resulting transformed space, the inter-class scatter matrix and the intra-class scatter matrix become, respectively:

Thus, the original n×n inter-class and intra-class scatter matrices are reduced to s_1×s_1 dimensions.

Now, we consider the eigenvalue decomposition of the transformed intra-class scatter matrix:

where U_w is an orthogonal matrix and Σ_w is a diagonal matrix. Thus:

In most application domains, the rank of the intra-class scatter matrix is larger than the rank of the inter-class scatter matrix, and Σ_w is non-singular, because:

So we have:

Therefore, the optimal transformation matrix can be obtained by the following formula:

In fact, the above optimization problem can be transformed into the following eigenvalue problem and solved:

The solution of the above eigenproblem can be obtained by solving a generalized eigenvalue problem. Suppose λ_1, λ_2, …, λ_t are the t largest eigenvalues of the eigenvalue problem, arranged in descending order, and w_1, w_2, …, w_t are the corresponding eigenvectors.

Solving this problem involves two main steps: the first is to maximize the inter-class scatter matrix through singular value decomposition (SVD), and the second is to solve the generalized eigenvalue problem. The key issue in the first step is to handle the following optimization problem:

We already know that U_{b1} is a column-orthogonal matrix and Σ_{b1} is a diagonal matrix with non-increasing, positive diagonal elements. Therefore, U_{b1} is a solution of the above problem.

In addition, the pseudo-inverse is usually used to handle singular matrices. The pseudo-inverse of a matrix can be computed through singular value decomposition (SVD). A natural extension of using the pseudo-inverse is to use the eigendecomposition of the corresponding matrix. More specifically, let M = UΣV^T be the singular value decomposition of M, where U and V are column-orthogonal matrices and Σ is a diagonal matrix with positive diagonal elements; then the pseudo-inverse of M is M^+ = VΣ^{-1}U^T. Based on the above discussion, we obtain the optimal transformation matrix.

Thus, based on the above transformations and arguments, we conclude that in the transformed s_1-dimensional subspace the resulting matrix is an optimal transformation matrix.
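
The following Python sketch summarizes step 3 as read from the description above: the 41 columns of each class's fused feature matrix are treated as that class's training vectors, the non-null part of the inter-class scatter is whitened (step 3.2b), and the transformed intra-class scatter is then diagonalized (step 3.2c). This direct-LDA-style reading and all parameter choices are assumptions, since the patent's formulas appear only as images.

```python
import numpy as np

def feature_space_transform(F_per_class, t=10, eps=1e-10):
    """F_per_class: one fused feature matrix F (d x 41) per class, whose columns act
    as that class's training vectors. Returns the overall transformation W (d x t)."""
    X = np.hstack(F_per_class)                        # all fused training vectors
    f0 = X.mean(axis=1, keepdims=True)                # mean vector of all samples
    d = X.shape[0]
    Sb = np.zeros((d, d))                             # Gabor inter-class scatter
    Sw = np.zeros((d, d))                             # Gabor intra-class scatter
    for Fc in F_per_class:
        fc = Fc.mean(axis=1, keepdims=True)           # class mean vector
        Sb += Fc.shape[1] * (fc - f0) @ (fc - f0).T
        Sw += (Fc - fc) @ (Fc - fc).T
    # (3.2b) SVD of Sb: keep the non-null part and whiten it -> W1
    Ub, sb, _ = np.linalg.svd(Sb)
    s1 = int((sb > eps).sum())
    W1 = Ub[:, :s1] / np.sqrt(sb[:s1])
    # (3.2c) eigendecompose the transformed intra-class scatter -> W2
    sw, Uw = np.linalg.eigh(W1.T @ Sw @ W1)           # eigenvalues in ascending order
    W2 = Uw[:, :min(t, s1)]                           # directions with least intra-class scatter
    return W1 @ W2                                    # overall transformation W = W1 W2
```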

Step 4. Construct the projected feature vector.

For a face test image I ∈ R^{M×N}, the Gabor-based spatial feature matrix can be obtained from the formulas of step 1, and the fused feature matrix can be obtained from the formula of step 2. Therefore, the Gabor direction block feature matrix of a new face test image is obtained from these formulas together with the transformation obtained in step 3.

Step 5. Finally, use the nearest neighbor classifier for recognition.

Based on the above description, for a face image A(x, y), the spatial feature information is extracted with Gabor wavelets and fused with the spectral information of the original image; after the feature space transformation, the optimal transformation matrix and the projected feature vectors are obtained in the transformed s_1-dimensional subspace, and recognition is then performed with the nearest neighbor classifier. The nearest neighbor classifier (NNC) is a non-parametric classifier. Its main idea is as follows: suppose the training sample set is X = {(x_1, l_1), (x_2, l_2), …, (x_n, l_n)}, where l_i, i = 1, 2, …, n, are the class labels; if the minimum distance between the test sample x and the training samples is attained at x_i, then the test sample x belongs to class l_i.

Suppose F_test is a test image; its Gabor direction block feature matrix x_test can be obtained according to the formulas above. F_test is assigned to the class of the training feature vector nearest to x_test in the projected subspace.

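A sketch of steps 4 and 5 under the same assumptions: the fused training and test feature vectors are projected with W, and the test image is assigned to the class of its nearest projected training vector. Euclidean distance is assumed here; the patent's decision-rule formula is not reproduced in this text.

```python
import numpy as np

def nearest_neighbor_classify(W, train_vectors, train_labels, test_vector):
    """train_vectors: d x N matrix of fused training feature vectors (one per column);
    train_labels: length-N sequence of class labels; test_vector: d x 1 column."""
    Y_train = W.T @ train_vectors                     # projected training vectors (t x N)
    y_test = W.T @ test_vector                        # projected test vector (t x 1)
    dists = np.linalg.norm(Y_train - y_test, axis=0)  # Euclidean distances in the subspace
    return train_labels[int(np.argmin(dists))]        # label of the nearest neighbor
```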

The effect of the invention can be further illustrated by the following simulation experiments.

1. Simulation conditions and parameters

The examples of the invention use the ORL, Yale, and FERET face databases. The ORL database contains 400 face images of size 112×92 from 40 people, 10 per person. Because these images were taken at different times, they vary in pose, angle, scale, expression, and the wearing of glasses. Fig. 4(a) shows 5 different face images from this dataset. In this experiment, 1 image of each person is used for training and the remaining 9 images are used for testing. The Yale face database contains 165 images from 15 people, 11 per person. These images vary with facial expression and illumination conditions, for example happy, sad, surprised, indifferent, and wearing glasses. Fig. 4(b) shows 5 different face images from this dataset. In the experiment, each image is resized to 100×100; 1 image of each person is used for training and the remaining images are used for testing. The FERET face database was initiated by the US Department of Defense through the DARPA program. This dataset contains 14051 grayscale face images from 1199 people with different poses, facial expressions, and so on. Fig. 4(c) shows 5 different face images from this dataset. In the experiment we select 5 different images for each of 15 people, 75 face images in total, and resize the images to 80×80. We carried out 50 experiments on the above three face databases and compared the proposed method with the existing single-sample methods, namely the SVD-based Fisher linear discriminant analysis method, the QRCP-based Fisher linear discriminant analysis method, and the SDD-based Fisher linear discriminant analysis method. Fig. 2 shows the Gabor filter bank with 5 different scales and 8 different directions. Fig. 3 shows the convolution results of 2 Gabor filters on a single face image.

2. Simulation content and analysis of results

In the simulation experiments, the method of the invention is compared with the traditional SVD-based Fisher linear discriminant analysis method, the QRCP-based Fisher linear discriminant analysis method, and the SDD-based Fisher linear discriminant analysis method; the experiments are carried out on the three datasets.

Experiment 1:

Experiment 1 was carried out on the ORL face database following the five steps above. The experimental results on the recognition rate are shown in Fig. 5. From Fig. 5 we can see that, among the four methods, the maximum recognition rate of the proposed method is 76.67%, higher than that of all the other methods (the maximum recognition rate of the SVD-based Fisher linear discriminant analysis method is 56.67%, that of the QRCP-based method is 68.89%, and that of the SDD-based method is 71.94%). Compared with the other three methods, the proposed method has the highest recognition rate and achieves the best recognition performance. The robustness of the proposed method clearly stems from the use of the Gabor direction feature blocks and the fused feature space transformation.

Experiment 2:

Experiment 2 was carried out on the Yale face database following the five steps above. Fig. 6 shows how the recognition rate of the four methods varies with the number of projection vectors. From Fig. 6 we can draw the following conclusions: for the proposed method, as the number of projection vectors increases, the recognition rate gradually increases and the recognition performance gradually improves. The maximum recognition rates of the SVD-based Fisher linear discriminant analysis method, the QRCP-based Fisher linear discriminant analysis method, the SDD-based Fisher linear discriminant analysis method, and the proposed method are 24.00%, 38.67%, 45.33%, and 64.67%, respectively. In terms of recognition rate, the proposed method is the best, the SDD-based and QRCP-based Fisher linear discriminant analysis methods are second best, and the SVD-based Fisher linear discriminant analysis method has the worst recognition performance, as Fig. 6 fully illustrates.

Experiment 3:

Experiment 3 was again carried out following the five steps above, this time on the FERET face database. The relationships between recognition rate and number of projection vectors for the SVD-based Fisher linear discriminant analysis method, the QRCP-based Fisher linear discriminant analysis method, the SDD-based Fisher linear discriminant analysis method, and the proposed method are shown in Fig. 7. Fig. 7 clearly shows that the recognition performance of the proposed method is higher than that of the other three methods and that its recognition rate gradually increases as the number of projection vectors increases, demonstrating very good recognition performance. The best recognition rates of the SVD-based Fisher linear discriminant analysis method, the QRCP-based Fisher linear discriminant analysis method, the SDD-based Fisher linear discriminant analysis method, and the proposed method are 88.83%, 86.67%, 93.33%, and 96.67%, respectively.

The above three experiments show that, in practical face recognition, the single-sample face recognition method based on Gabor feature extraction and spatial transformation of the invention achieves good recognition results because the Gabor feature blocks are robust to local distortions caused by changes in expression, pose, and illumination. Table 1 gives the maximum recognition rate (rr, %) and recognition time (t, s) of the four methods on the three datasets. #1, #2, and #3 denote the ORL, Yale, and FERET datasets, respectively.

Table 1. Maximum recognition rate (rr, %) and recognition time (t, s) of the four methods on the different datasets.

As can be seen from Table 1, the recognition rate of the proposed method is higher than that of the other three methods (the SVD-based, QRCP-based, and SDD-based Fisher linear discriminant analysis methods), and its recognition time is also far shorter than that of the other three methods. In other words, in all the above experiments the running time of the proposed method on the ORL, Yale, and FERET datasets is about 22.28, 98.08, and 52.60 times faster than that of the SDD-based method, respectively.

The experimental result figures clearly show that the recognition rate of the method of the invention is obviously higher than that of the SVD-based, QRCP-based, and SDD-based Fisher linear discriminant analysis methods, and that its average recognition time is significantly lower than that of the other three algorithms. This difference in time is mainly caused by image vectorization. It can thus be seen that the face recognition method of the invention is a very effective and highly robust single-sample face recognition method.

Claims (3)

1. A single-sample face recognition method based on Gabor feature extraction and spatial transformation comprises the following steps:
(1) extracting spatial information of a single image by using a Gabor wavelet:
(1.1) constructing a Gabor filter function: the invention adopts a two-dimensional Gabor filter, a Gaussian kernel function modulated by a complex sinusoidal plane wave, to extract the spatial information of a single image, defined as:
wherein f is the center frequency of the complex sinusoidal plane wave, θ represents the direction of the normal to the parallel stripes of the Gabor function, φ is the phase, σ is the standard deviation, and γ is the spatial aspect ratio specifying the ellipticity of the support of the Gabor function;
(1.2) constructing a Gabor filter bank: since the Gabor filter bank is composed of a group of Gabor filters with different frequencies and directions, in the present invention, we use Gabor filter banks with five different scales and eight different directions, and the following two formulas give a Gabor filter bank with five different scales and eight different directions:
(1.3) Gabor representation of a face image: for a face image A(x, y), its Gabor representation can be obtained by convolving the original image with the Gabor filters, i.e.:
wherein G(x, y) represents the two-dimensional convolution result of the Gabor filter at scale u and direction v, the size of G(x, y) is determined by a down-sampling factor ξ, and G(x, y) is normalized to zero mean and unit variance to obtain a filtered feature matrix Z_{u,v} ∈ R^{m×n};
(1.4) constructing a Gabor direction block feature matrix: converting the filtered feature matrix Z_{u,v} obtained in (1.3) into a one-dimensional column vector, and using Z_0 to represent the Gabor direction block feature matrix of the face image A(x, y) over the five different scales and eight different directions, as follows:
wherein each column is the one-dimensional representation of Z_{u,v}, and Z_0 ∈ R^{(m·n)×40} is the Gabor direction block feature matrix obtained from the convolution result G(x, y);
(2) fusing the spatial information extracted by the Gabor wavelet with the spectral information of the original image: the Gabor direction block feature matrix Z_0 ∈ R^{(m·n)×40} is obtained from step (1); on the other hand, since a single training sample image itself contains very important spectral information, the Gabor spatial feature vectors and the spectral feature vector Y_0 are fused to obtain a fused feature matrix F ∈ R^{(m·n)×41};
wherein σ_1 and σ_2 are the standard deviations of Z_0 and Y_0, respectively, which can be obtained by calculating the square root of the variance of the feature vectors;
(3) feature space transformation based on the fusion feature matrix:
(3.1) establishing a fusion characteristic optimization model
(3.2) performing feature space transformation to obtain a transformation matrix
(4) Constructing a projection feature vector;
(5) after the projected feature vectors are obtained, the nearest neighbor classifier is used for identification.
2. The single-sample face recognition method based on Gabor feature extraction and spatial transformation according to claim 1, characterized in that the specific process of the feature space transformation in step (3.2) is as follows:
(3.2a) defining the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix:
suppose that n-dimensional training samples are obtained from the fused feature matrix, c is the number of classes, and n_i (i = 1, 2, …, c) is the number of training samples of class i; the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix are then defined as follows:
wherein f_i^j is the j-th fused feature vector from class i, f_i is the mean vector of class i, and f_0 is the mean vector of all training samples;
(3.2b) transforming the inter-class and intra-class scatter matrices into the s_1-dimensional space and obtaining a transformation W_1:
first consider the singular value decomposition (SVD) of the Gabor inter-class scatter matrix:
partition U_b into blocks; therefore,
the formula can be converted into the following form:
wherein the left factor is a column-orthogonal matrix and the middle factor is a diagonal matrix with non-increasing and positive diagonal elements; in practical applications, the singularity of the matrix may cause a reduction in discriminative ability; therefore, its zero eigenvalues and the corresponding eigenvectors should be discarded; based on the above considerations, a transformation is used to map the original data into the s_1-dimensional space;
(3.2c) performing the related transformation in the s_1-dimensional transformed space to obtain the final transformation W_2:
in the obtained s_1-dimensional transformed space, the inter-class scatter matrix becomes:
and the intra-class scatter matrix becomes:
now, we consider the eigenvalue decomposition of the transformed intra-class scatter matrix:
wherein U_w is an orthogonal matrix and Σ_w is a diagonal matrix; thus:
in most application fields, the rank of the intra-class scatter matrix is greater than that of the inter-class scatter matrix, and Σ_w is non-singular, because:
therefore:
thus, the optimal transformation matrix can be obtained by the following formula:
3. The single-sample face recognition method based on Gabor feature extraction and spatial transformation according to claim 1, characterized in that:
the specific method for constructing the projected feature vector in step (4) comprises the following steps:
for a test feature vector f ∈ R^{n×1}, the projected feature vector can be obtained by the following linear transformation:
it is apparent that the computational complexity is significantly reduced; for a face test image I ∈ R^{M×N}, the Gabor-based spatial feature matrix and the fused feature matrix can be obtained from the formulas above; thus, the Gabor direction block feature matrix of the new face test image can be obtained from the following equation:
then x is the Gabor direction block feature matrix we need, here denoted as x_test.
CN201611059543.6A 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation Active CN106778522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611059543.6A CN106778522B (en) 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611059543.6A CN106778522B (en) 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Publications (2)

Publication Number Publication Date
CN106778522A true CN106778522A (en) 2017-05-31
CN106778522B CN106778522B (en) 2020-08-04

Family

ID=58911568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611059543.6A Active CN106778522B (en) 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Country Status (1)

Country Link
CN (1) CN106778522B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2428916A2 (en) * 2010-09-09 2012-03-14 Samsung Electronics Co., Ltd. Method and apparatus to generate object descriptor using extended curvature gabor filter
CN102855468A (en) * 2012-07-31 2013-01-02 东南大学 Single sample face recognition method in photo recognition
CN103514443A (en) * 2013-10-15 2014-01-15 中国矿业大学 Single-sample face recognition transfer learning method based on LPP (Low Power Point) feature extraction
CA2931348A1 (en) * 2013-11-25 2015-05-28 Ehsan Fazl Ersi System and method for face recognition
CN104239856A (en) * 2014-09-04 2014-12-24 电子科技大学 Face recognition method based on Gabor characteristics and self-adaptive linear regression

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NIE, Xiangfei: "Using Gabor wavelet transform to solve the small-sample problem in face recognition", Optics and Precision Engineering *
ZOU, Jianfa: "Face recognition based on enhanced Gabor features and direct fractional-step linear discriminant analysis", Pattern Recognition and Artificial Intelligence *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403194A (en) * 2017-07-26 2017-11-28 广州慧扬健康科技有限公司 Cutaneum carcinoma image recognition visualization model based on t SNE
CN107403194B (en) * 2017-07-26 2020-12-18 广州慧扬健康科技有限公司 Skin cancer image recognition visualization system based on t-SNE
CN107798308A (en) * 2017-11-09 2018-03-13 石数字技术成都有限公司 A kind of face identification method based on short-sighted frequency coaching method
CN107798308B (en) * 2017-11-09 2020-09-22 一石数字技术成都有限公司 Face recognition method based on short video training method
CN107886090A (en) * 2017-12-15 2018-04-06 苏州大学 A kind of single sample face recognition method, system, equipment and readable storage medium storing program for executing
CN107886090B (en) * 2017-12-15 2021-07-30 苏州大学 A single-sample face recognition method, system, device and readable storage medium
CN114445720A (en) * 2021-12-06 2022-05-06 西安电子科技大学 A hyperspectral anomaly detection method based on spatial spectrum depth collaboration

Also Published As

Publication number Publication date
CN106778522B (en) 2020-08-04

Similar Documents

Publication Publication Date Title
Li et al. Overview of principal component analysis algorithm
CN103632138B (en) A kind of face identification method of low-rank piecemeal rarefaction representation
CN103646244B (en) Extraction, authentication method and the device of face characteristic
CN102855468B (en) A kind of single sample face recognition method in photograph identification
CN106778522B (en) Single-sample face recognition method based on Gabor feature extraction and spatial transformation
CN102324022B (en) Composite gradient vector-based face recognition method
CN107292299B (en) Side face recognition methods based on kernel specification correlation analysis
CN108647690A (en) The sparse holding projecting method of differentiation for unconstrained recognition of face
CN106503633A (en) The method for building up in face characteristic storehouse in a kind of video image
CN104966075B (en) A kind of face identification method and system differentiating feature based on two dimension
Beham et al. Face recognition using appearance based approach: A literature survey
CN108664941B (en) Nuclear sparse description face recognition method based on geodesic mapping analysis
Lei et al. Face recognition by discriminant analysis with Gabor tensor representation
Lei et al. Learning discriminant face descriptor for face recognition
CN108304833A (en) Face identification method based on MBLBP and DCT-BM2DPCA
Jiang et al. Feature extraction and recognition of face image based on 2DPCA with LDA algorithm
CN108830163B (en) A customs identity verification system and method based on local discrimination CCA
Gatto et al. Kernel two dimensional subspace for image set classification
Zhang et al. Local coordinates alignment (LCA): a novel manifold learning approach
Chaudhary et al. Large scale double density dual tree complex wavelet transform based robust feature extraction for face recognition
Chen et al. Fusing multiple features for Fourier Mellin-based face recognition with single example image per person
Yang et al. Local descriptor margin projections (LDMP) for face recognition
Yu et al. Principal patterns of fractional-order differential gradients for face recognition
Zhu et al. Discriminant analysis with Gabor phase for robust face recognition
Ji et al. Fractional-order embedding supervised canonical correlations analysis with applications to feature extraction and recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210923

Address after: 710016 B 1616, Tiandi Times Plaza, Fengcheng two road, Weiyang District, Xi'an, Shaanxi.

Patentee after: Liu Jiaojiao

Address before: No. 1800 road 214122 Jiangsu Lihu Binhu District City of Wuxi Province

Patentee before: Jiangnan University

TR01 Transfer of patent right

Effective date of registration: 20221026

Address after: No.06, 20th Floor, Building C, Huihai Square, Chuangye Road, Longhua Street, Longhua District, Shenzhen, Guangdong 518109

Patentee after: SHENZHEN QIANKEDUO INFORMATION TECHNOLOGY CO.,LTD.

Address before: 710016 B 1616, Tiandi Times Plaza, Fengcheng two road, Weiyang District, Xi'an, Shaanxi.

Patentee before: Liu Jiaojiao

TR01 Transfer of patent right

Effective date of registration: 20231220

Address after: W338, 3rd Floor, Port Building, Shipping Center, No. 1167 Yihai Avenue, Nanshan Street, Qianhai Shenzhen Hong Kong Cooperation Zone, Shenzhen, Guangdong Province, 518000

Patentee after: Shenzhen Huiyouba Technology Co.,Ltd.

Address before: No.06, 20th Floor, Building C, Huihai Square, Chuangye Road, Longhua Street, Longhua District, Shenzhen, Guangdong 518109

Patentee before: SHENZHEN QIANKEDUO INFORMATION TECHNOLOGY CO.,LTD.

TR01 Transfer of patent right