CN107563334B - Face recognition method based on identification linear representation preserving projection - Google Patents


Info

Publication number
CN107563334B
CN107563334B (application CN201710800209.XA)
Authority
CN
China
Prior art keywords
training sample
representing
linear representation
projection
face recognition
Prior art date
Legal status
Active
Application number
CN201710800209.XA
Other languages
Chinese (zh)
Other versions
CN107563334A (en)
Inventor
刘茜
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201710800209.XA priority Critical patent/CN107563334B/en
Publication of CN107563334A publication Critical patent/CN107563334A/en
Application granted granted Critical
Publication of CN107563334B publication Critical patent/CN107563334B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a face recognition method based on discriminative linear representation preserving projection, which linearly represents each training sample using the other training samples of the same class and performs discriminant analysis on all training samples and their linear representations. Compared with the prior art, the method greatly reduces computation time and effectively improves recognition results.

Description

Face recognition method based on identification linear representation preserving projection
Technical Field
The invention particularly relates to a face recognition method based on discriminative linear representation preserving projection, and belongs to the technical field of face recognition.
Background
(1) Sparsity preserving projections (SPP, L. Qiao, S. Chen, X. Tan, "Sparsity Preserving Projections with Applications to Face Recognition", Pattern Recognition, vol. 43, no. 1, pp. 331-341, 2010):
Let X = [x_1, x_2, …, x_N] denote a training sample set containing N samples, where x_i ∈ R^d (R^d is the set of d-dimensional real vectors) is the i-th training sample.
SPP first obtains the sparse coefficient vector α_i = [α_1i, α_2i, …, α_Ni]^T ∈ R^N of training sample x_i by solving the following problem:

min_{α_i} ||α_i||_1   s.t.   ||x_i − Xα_i|| < ε,   e^T α_i = 1

where ε > 0 is a small positive real number that controls the sparse reconstruction error, e ∈ R^N is a column vector whose elements are all 1, and α_ii = 0. SPP then obtains the optimal linear projection vector u by solving the following problem:

min_u  Σ_{i=1}^{N} ||u^T x_i − u^T X α_i||²   s.t.   u^T X X^T u = 1
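As a rough illustration (not part of the patent), the SPP coding problem above can be posed directly with a generic convex solver. The sketch below uses the cvxpy package; the solver choice, the function name and the default tolerance are assumptions, and the strict inequality is relaxed to ≤ eps.

```python
import numpy as np
import cvxpy as cp

def spp_sparse_coefficients(X, i, eps=0.05):
    """Solve the SPP sparse coding problem for training sample x_i (sketch).

    X   : d x N matrix of training samples (columns are samples)
    i   : index of the sample to be represented
    eps : bound on the reconstruction error ||x_i - X a||
    """
    d, N = X.shape
    a = cp.Variable(N)
    constraints = [
        cp.norm(X[:, i] - X @ a, 2) <= eps,  # reconstruction error bound
        cp.sum(a) == 1,                      # e^T a = 1
        a[i] == 0,                           # a sample may not represent itself
    ]
    prob = cp.Problem(cp.Minimize(cp.norm1(a)), constraints)
    prob.solve()
    return a.value
```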
(2) Shortcomings of the sparsity preserving projection method and the proposed improvement:
The sparsity preserving projection method has two problems: (a) the time complexity of computing the sparse coefficients is very high, and the computation time increases rapidly as the number of training samples grows; moreover, according to the principle of sparse representation, the number of training samples must be at least close to d so that ||x_i − Xα_i|| < ε can be guaranteed for a small ε, and d is usually a fairly large number; (b) the sparsity preserving projection method is an unsupervised linear projection method, and its recognition performance is generally lower than that of supervised methods.
The non-zero coefficients of the sparse vector α_i mainly correspond to the training samples of the same class as x_i; this is the principle underlying sparse representation classification. The face recognition method based on discriminative linear representation preserving projection therefore represents each training sample x_i linearly using only the other training samples of the same class, and performs discriminant analysis on all training samples and their linear representations. Compared with the sparsity preserving projection method, on the one hand, the proposed method only needs to compute linear representation coefficients over a small number of same-class training samples, so the computation time is greatly reduced; on the other hand, it uses a supervised discriminant analysis technique, which effectively improves the recognition results.
Disclosure of Invention
The face recognition method based on discriminative linear representation preserving projection linearly represents each training sample using the other training samples of the same class, and performs discriminant analysis on all training samples and their linear representations. Compared with the sparsity preserving projection method, it greatly reduces the computation time and effectively improves the recognition results.
Simulation experiments were performed on the Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 face database (P. J. Phillips, P. J. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, W. Worek, "Overview of the Face Recognition Grand Challenge", IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 947-954, 2005), demonstrating the effectiveness of the face recognition method based on discriminative linear representation preserving projection.
The technical scheme is as follows:
Let X = [X_1, X_2, …, X_c] denote a training sample set containing c classes, where X_i = [x_i1, x_i2, …, x_iN_i] is the training sample set of the i-th class and contains N_i samples; x_ij ∈ R^d is the j-th training sample of the i-th class, and R^d denotes the set of d-dimensional real vectors. N = Σ_{i=1}^{c} N_i is the total number of samples in the training sample set, and y ∈ R^d is a sample to be recognized.
The steps of the face recognition method based on discriminative linear representation preserving projection are as follows:
In the first step, the linear representation coefficient of each training sample x_ij is obtained by solving problem (1), in which x_ij is linearly represented by the other training samples of its own class:

[formula (1) and the accompanying coefficient definitions appear only as images in the original document]

The coefficients obtained for all training samples form the coefficient matrix A ∈ R^{N×N} that appears in the second step.
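Since formula (1) and the definitions that accompany it are available only as images, the following Python sketch shows one plausible realization of the first step: each sample is reconstructed by an ordinary least-squares combination of the other samples of its class, and the coefficients are stacked into an N × N block-diagonal matrix A. The function name, the plain least-squares formulation (no extra constraints) and the block-diagonal layout of A are assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np

def class_representation_coefficients(X_blocks):
    """First step (sketch): represent each training sample by the other samples of its class.

    X_blocks : list of c arrays, X_blocks[i] has shape (d, N_i); columns are samples.
    Returns an N x N coefficient matrix A (N = sum of N_i); the column for sample x_ij
    holds its representation coefficients over the same-class samples, with its own
    entry kept at zero.
    """
    sizes = [Xi.shape[1] for Xi in X_blocks]
    N = sum(sizes)
    A = np.zeros((N, N))
    offset = 0
    for Xi, Ni in zip(X_blocks, sizes):
        for j in range(Ni):
            others = np.delete(np.arange(Ni), j)               # the other same-class samples
            coeffs, *_ = np.linalg.lstsq(Xi[:, others], Xi[:, j], rcond=None)
            col = np.zeros(Ni)
            col[others] = coeffs
            A[offset:offset + Ni, offset + j] = col            # only the same-class block is non-zero
        offset += Ni
    return A
```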
In the second step, discriminant analysis is performed on the training samples and their linear representations:

[criterion (2) appears only as an image in the original document]

where v ∈ R^d is a linear projection vector. Problem (2) can be converted into

max_v  (v^T X P X^T v) / (v^T X Q X^T v)    (3)

where P = [(N·I − I_c) + A(N·I − I_c)A^T] − [(E − E_c)A^T + A(E − E_c)], Q = (I_c + A I_c A^T) − (E_c A^T + A E_c), I ∈ R^{N×N} is the identity matrix, E ∈ R^{N×N} is a square matrix whose elements are all 1, A is the coefficient matrix assembled in the first step, and I_c, E_c ∈ R^{N×N} are defined by expressions that appear only as images in the original document (the surrounding text indicates that their building blocks are, respectively, identity matrices and all-ones square matrices, and that the coefficients satisfy a constraint also shown only as an image). The solution v* of problem (3) is obtained by eigendecomposition of the matrix (X Q X^T)^{-1} X P X^T.
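As an illustration of the second step, the sketch below assembles P and Q from the quoted formulas and solves problem (3) by a direct eigendecomposition. Because the exact definitions of I_c and E_c are only available as images, they are assumed here to be block-diagonal per class, with blocks N_i·I_{N_i} and the all-ones matrix of size N_i × N_i respectively (chosen so that N·I − I_c and E − E_c act like between-class degree and adjacency matrices); this choice, like the function name, is an assumption rather than the patent's definition.

```python
import numpy as np
from scipy.linalg import block_diag

def dlrpp_projection(X, A, class_sizes, m):
    """Second step (sketch): eigenvectors of (X Q X^T)^{-1} X P X^T with the largest eigenvalues.

    X           : d x N matrix of all training samples (columns ordered class by class)
    A           : N x N linear-representation coefficient matrix from the first step
    class_sizes : list [N_1, ..., N_c]
    m           : number of projection vectors to keep (tunable parameter)
    """
    N = X.shape[1]
    I = np.eye(N)
    E = np.ones((N, N))
    # Assumed block-diagonal forms of I_c and E_c (their definitions are images in the patent text).
    I_c = block_diag(*[Ni * np.eye(Ni) for Ni in class_sizes])
    E_c = block_diag(*[np.ones((Ni, Ni)) for Ni in class_sizes])

    P = ((N * I - I_c) + A @ (N * I - I_c) @ A.T) - ((E - E_c) @ A.T + A @ (E - E_c))
    Q = (I_c + A @ I_c @ A.T) - (E_c @ A.T + A @ E_c)

    S = np.linalg.solve(X @ Q @ X.T, X @ P @ X.T)    # (X Q X^T)^{-1} X P X^T without an explicit inverse
    eigvals, eigvecs = np.linalg.eig(S)
    order = np.argsort(-eigvals.real)                # sort eigenvalues from largest to smallest
    return eigvecs[:, order[:m]].real                # V = [v_1, ..., v_m], shape d x m
```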
In the third step, after obtaining the eigenvectors v_k (k = 1, 2, …, m) corresponding to the m largest eigenvalues of the matrix (X Q X^T)^{-1} X P X^T, where m is a tunable parameter, let V = [v_1, v_2, …, v_m]. The projected training sample feature set is

Z_X = V^T X    (4)

and the feature of the sample to be recognized is

z_y = V^T y    (5)

The distance from z_y to each training sample feature is computed, and y is assigned to the class of the training sample with the smallest distance.
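A short sketch of the third step under the same assumptions: project the training samples and the query sample with V (formulas (4) and (5)) and assign the class of the nearest projected training sample. The Euclidean distance is used here as an assumption; the patent only speaks of "the distance".

```python
import numpy as np

def classify(V, X, labels, y):
    """Third step (sketch): projection followed by nearest-neighbour assignment.

    V      : d x m projection matrix from the second step
    X      : d x N training sample matrix
    labels : length-N array with the class label of each column of X
    y      : d-dimensional sample to be recognized
    """
    Z_X = V.T @ X                                         # projected training features, formula (4)
    z_y = V.T @ y                                         # projected query feature, formula (5)
    dists = np.linalg.norm(Z_X - z_y[:, None], axis=0)    # distance to every projected training sample
    return labels[np.argmin(dists)]                       # class of the closest training sample
```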
Advantageous effects
Compared with the prior art, the invention adopting the technical scheme has the following beneficial effects:
the invention provides a face recognition method based on identification linear representation preserving projection, which is characterized in that other training samples of the same type are used for each training sample to linearly represent the training sample, and all the training samples and the linear representations thereof are subjected to identification analysis. Compared with the prior art, the method can greatly reduce the calculation time and effectively improve the identification result.
Drawings
FIG. 1 is an example picture of a human face;
FIG. 2 is a graph showing the fluctuation of the recognition rate over 20 random tests.
Detailed Description
The technical solution of the present invention is specifically described below with reference to the accompanying drawings.
The Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 face database (P. J. Phillips, P. J. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, W. Worek, "Overview of the Face Recognition Grand Challenge", IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 947-954, 2005) was selected for experimental verification. The database is large and comprises three sub-libraries: train, target and query; the train sub-library contains 12776 pictures of 222 persons, the target sub-library contains 16028 pictures of 466 persons, and the query sub-library contains 8014 pictures of 466 persons. The experiment selected 100 persons from the train sub-library, each with 36 images. All selected images are converted from the original color images into gray images, corrected (so that the two eyes are in horizontal position), scaled and cropped, and each image sample retains only the face and its immediate surroundings at a size of 60 × 60. An example of a processed face image is shown in FIG. 1.
In the experimental database, 18 face image samples are randomly selected from each class as training samples, the remaining samples are used as samples to be recognized, and 20 random tests are performed.
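For illustration only, one random trial of this protocol could be run as below, reusing the class_representation_coefficients, dlrpp_projection and classify sketches given earlier; the data layout, label handling and the choice m = 50 are assumptions and not taken from the patent.

```python
import numpy as np

def run_one_trial(samples_by_class, m=50, n_train=18, seed=0):
    """One random trial: n_train training images per class, the rest used for testing.

    samples_by_class : list of (d, 36) arrays, one per selected person (columns are vectorized images).
    Returns the recognition rate of the trial.
    """
    rng = np.random.default_rng(seed)
    train_blocks, test_cols, test_labels = [], [], []
    for label, S in enumerate(samples_by_class):
        idx = rng.permutation(S.shape[1])
        train_blocks.append(S[:, idx[:n_train]])
        test_cols.append(S[:, idx[n_train:]])
        test_labels.extend([label] * (S.shape[1] - n_train))

    X = np.hstack(train_blocks)                                   # d x N training matrix
    labels = np.repeat(np.arange(len(train_blocks)), n_train)
    class_sizes = [n_train] * len(train_blocks)

    A = class_representation_coefficients(train_blocks)           # first step
    V = dlrpp_projection(X, A, class_sizes, m)                    # second step

    Y = np.hstack(test_cols)
    predictions = [classify(V, X, labels, Y[:, k]) for k in range(Y.shape[1])]
    # recognition rate = correctly recognized samples / total samples to be recognized
    return float(np.mean(np.array(predictions) == np.array(test_labels)))
```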
FIG. 2 and Table 1 show the recognition results of the sparsity preserving projection method (the SPP method in the figure) and the face recognition method based on discriminative linear representation preserving projection (the DLRPP method in the figure) over 20 random tests. In FIG. 2, the abscissa is the index of the random test and the ordinate is the recognition rate (i.e., the number of correctly recognized samples divided by the total number of samples to be recognized). Table 1 gives the mean and standard deviation of the recognition rate and the average training time of the two methods over the 20 random tests. Compared with the sparsity preserving projection method, the face recognition method based on discriminative linear representation preserving projection achieves a markedly better recognition result and a much shorter training time, which verifies its effectiveness.
TABLE 1

Method | Recognition rate (mean ± standard deviation, %) | Average training time (s)
SPP    | 76.52 ± 4.60                                     | 3446.84
DLRPP  | 91.31 ± 1.84                                     | 2.62

Claims (1)

1. A face recognition method based on discriminative linear representation preserving projection is characterized in that,
let X = [X_1, X_2, …, X_c] denote a training sample set containing c classes, where X_i = [x_i1, x_i2, …, x_iN_i] is the training sample set of the i-th class and contains N_i samples; x_ij ∈ R^d is the j-th training sample of the i-th class, and R^d denotes the set of d-dimensional real vectors; N = Σ_{i=1}^{c} N_i is the total number of samples in the training sample set, and y ∈ R^d is a sample to be recognized;
the method comprises the following specific steps:
in the first step, the linear representation coefficient of each training sample x_ij is obtained by solving problem (1), in which x_ij is linearly represented by the other training samples of its own class:

[formula (1) and the accompanying coefficient definitions appear only as images in the original document]

the coefficients obtained for all training samples form the coefficient matrix A ∈ R^{N×N} used in the second step;
in the second step, discriminant analysis is performed on the training samples and their linear representations:

[criterion (2) appears only as an image in the original document]

wherein v ∈ R^d is a linear projection vector; problem (2) can be converted into

max_v  (v^T X P X^T v) / (v^T X Q X^T v)    (3)

wherein P = [(N·I − I_c) + A(N·I − I_c)A^T] − [(E − E_c)A^T + A(E − E_c)], Q = (I_c + A I_c A^T) − (E_c A^T + A E_c), I ∈ R^{N×N} is the identity matrix, E ∈ R^{N×N} is a square matrix whose elements are all 1, A is the coefficient matrix assembled in the first step, and I_c, E_c ∈ R^{N×N} are defined by expressions that appear only as images in the original document (their building blocks are, respectively, identity matrices and all-ones square matrices, and the coefficients satisfy a constraint also shown only as an image); the solution v* of problem (3) is obtained by eigendecomposition of the matrix (X Q X^T)^{-1} X P X^T;
in the third step, after obtaining the eigenvectors v_k (k = 1, 2, …, m) corresponding to the m largest eigenvalues of the matrix (X Q X^T)^{-1} X P X^T, where m is a tunable parameter, let V = [v_1, v_2, …, v_m]; the projected training sample feature set is

Z_X = V^T X    (4)

and the feature of the sample to be recognized is

z_y = V^T y    (5)

the distance from z_y to each training sample feature is computed, and y is assigned to the class of the training sample with the smallest distance.
CN201710800209.XA 2017-09-07 2017-09-07 Face recognition method based on identification linear representation preserving projection Active CN107563334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710800209.XA CN107563334B (en) 2017-09-07 2017-09-07 Face recognition method based on identification linear representation preserving projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710800209.XA CN107563334B (en) 2017-09-07 2017-09-07 Face recognition method based on identification linear representation preserving projection

Publications (2)

Publication Number Publication Date
CN107563334A CN107563334A (en) 2018-01-09
CN107563334B true CN107563334B (en) 2020-08-11

Family

ID=60979503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710800209.XA Active CN107563334B (en) 2017-09-07 2017-09-07 Face recognition method based on identification linear representation preserving projection

Country Status (1)

Country Link
CN (1) CN107563334B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084163B (en) * 2019-04-18 2020-06-30 南京信息工程大学 Face recognition method based on multi-view local linear representation preserving, identifying and embedding
CN110046582B (en) * 2019-04-18 2020-06-02 南京信息工程大学 Color face recognition method based on multi-view discrimination linear representation preserving projection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379602B2 (en) * 2002-07-29 2008-05-27 Honda Giken Kogyo Kabushiki Kaisha Extended Isomap using Fisher Linear Discriminant and Kernel Fisher Linear Discriminant
CN105893947A (en) * 2016-03-29 2016-08-24 江南大学 Bi-visual-angle face identification method based on multi-local correlation characteristic learning
CN106056088A (en) * 2016-06-03 2016-10-26 西安电子科技大学 Single-sample face recognition method based on self-adaptive virtual sample generation criterion
CN106097250A (en) * 2016-06-22 2016-11-09 江南大学 A kind of based on the sparse reconstructing method of super-resolution differentiating canonical correlation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379602B2 (en) * 2002-07-29 2008-05-27 Honda Giken Kogyo Kabushiki Kaisha Extended Isomap using Fisher Linear Discriminant and Kernel Fisher Linear Discriminant
CN105893947A (en) * 2016-03-29 2016-08-24 江南大学 Bi-visual-angle face identification method based on multi-local correlation characteristic learning
CN106056088A (en) * 2016-06-03 2016-10-26 西安电子科技大学 Single-sample face recognition method based on self-adaptive virtual sample generation criterion
CN106097250A (en) * 2016-06-22 2016-11-09 江南大学 A kind of based on the sparse reconstructing method of super-resolution differentiating canonical correlation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation; Yong Xu et al.; IEEE Transactions on Image Processing; 2016-02-28; full text *
Research on feature extraction methods for color face images; 刘茜; China Doctoral Dissertations Full-text Database; 2016-04-15; full text *

Also Published As

Publication number Publication date
CN107563334A (en) 2018-01-09

Similar Documents

Publication Publication Date Title
CN107392190B (en) Color face recognition method based on semi-supervised multi-view dictionary learning
WO2018149133A1 (en) Method and system for face recognition by means of dictionary learning based on kernel non-negative matrix factorization, and sparse feature representation
CN105956611B (en) Based on the SAR image target recognition method for identifying non-linear dictionary learning
CN107238822B (en) Method for extracting orthogonal nonlinear subspace characteristics of true and false target one-dimensional range profile
CN104281855B (en) Hyperspectral image classification method based on multi-task low rank
CN111160189A (en) Deep neural network facial expression recognition method based on dynamic target training
CN105574475B (en) A kind of rarefaction representation classification method based on common vector dictionary
US9330332B2 (en) Fast computation of kernel descriptors
CN105447884A (en) Objective image quality evaluation method based on manifold feature similarity
CN102609681A (en) Face recognition method based on dictionary learning models
CN106980848A (en) Facial expression recognizing method based on warp wavelet and sparse study
CN107563334B (en) Face recognition method based on identification linear representation preserving projection
CN112967210B (en) Unmanned aerial vehicle image denoising method based on full convolution twin network
CN107194314B (en) Face recognition method fusing fuzzy 2DPCA and fuzzy 2DLDA
CN107480623A (en) The neighbour represented based on cooperation keeps face identification method
CN104715266B (en) The image characteristic extracting method being combined based on SRC DP with LDA
CN107506744B (en) Face recognition method based on local linear representation preserving identification embedding
CN105740787B (en) Identify the face identification method of color space based on multicore
CN105975940A (en) Palm print image identification method based on sparse directional two-dimensional local discriminant projection
CN106056131A (en) Image feature extraction method based on LRR-LDA
On et al. Analysis of sparse PCA using high dimensional data
CN106650769A (en) Linear representation multi-view discrimination dictionary learning-based classification method
CN106446840B (en) Color face recognition method based on canonical correlation Multiple Kernel Learning
Hill et al. Aging the human face-a statistically rigorous approach
CN109063750A (en) SAR target classification method based on CNN and SVM decision fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant