CN107563334A - Face recognition method based on discriminative linear representation preserving projection - Google Patents
Abstract
The invention discloses a face recognition method based on discriminative linear representation preserving projection. The method linearly represents each training sample using the other training samples of its class, and performs discriminant analysis on all training samples and their linear representations. Compared with the prior art, the invention greatly reduces computation time and effectively improves recognition results.
Description
Technical field
The present invention relates to a face recognition method based on discriminative linear representation preserving projection, and belongs to the field of face recognition technology.
Background technology
(1) The sparsity preserving projections method (SPP; L. Qiao, S. Chen, X. Tan, "Sparsity Preserving Projections with Applications to Face Recognition", Pattern Recognition, vol. 43, no. 1, pp. 331-341, 2010):
Let X = [x_1, x_2, …, x_N] denote a training sample set containing N samples, where x_i ∈ R^d (R^d denotes the set of d-dimensional real vectors) is the i-th training sample.

SPP first obtains the sparse coefficients α_i = [α_1i, α_2i, …, α_Ni]^T ∈ R^N of each training sample x_i by solving

    min_{α_i} ||α_i||_1   s.t.   ||x_i − X α_i||_2 < ε,   e^T α_i = 1

where ε > 0 is a small positive real number that controls the sparse reconstruction error, e ∈ R^N is a column vector whose elements are all 1, and α_ii = 0. SPP then obtains the optimal linear projection vector u by solving

    max_u (u^T X S_α X^T u) / (u^T X X^T u),   where S_α = A + A^T − A^T A and A = [α_1, α_2, …, α_N].
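As a rough NumPy sketch of the SPP projection step above — assuming the sparse coefficient matrix A has already been obtained from an l1 solver, which is precisely the expensive computation discussed next; the function name and the small regularization term are illustrative assumptions, not part of the cited method:

```python
import numpy as np

def spp_projection(X, A, m):
    """Sketch of the SPP projection step: given training samples X (d x N)
    and sparse coefficients A (N x N, column i = alpha_i, A[i, i] == 0),
    return the m leading projection vectors solving
    max_u (u^T X S X^T u) / (u^T X X^T u) with S = A + A^T - A^T A."""
    S = A + A.T - A.T @ A
    num = X @ S @ X.T
    den = X @ X.T + 1e-6 * np.eye(X.shape[0])  # small ridge for invertibility
    w, V = np.linalg.eig(np.linalg.solve(den, num))
    order = np.argsort(-w.real)
    return V[:, order[:m]].real

# toy shapes only -- these coefficients are random, not truly sparse
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))
A = rng.standard_normal((10, 10)) * (1 - np.eye(10))
U = spp_projection(X, A, 3)
print(U.shape)  # (60, 3)
```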
(2) Deficiencies of the sparsity preserving projections method, and the improvement:

The sparsity preserving projections method has two shortcomings. (a) The time complexity of computing the sparse coefficients is very high: the computation time grows exponentially with the number of training samples. Moreover, by the principle of sparse representation, the number of training samples must be at least close to d to guarantee that ||x_i − X α_i|| < ε is satisfied for small ε, and d is usually a large number. (b) The sparsity preserving projections method is an unsupervised linear projection method, and its recognition performance is usually inferior to that of supervised methods.
The nonzero coefficients in the sparse vector α_i mainly correspond to the training samples of the same class as x_i; this is the principle of sparse representation classification. The face recognition method based on discriminative linear representation preserving projection uses the other training samples of the same class as x_i to represent x_i linearly, and performs discriminant analysis on all training samples and their linear representations. Compared with the sparsity preserving projections method, on the one hand, the proposed method only needs to compute linear representation coefficients over a small number of same-class training samples, which greatly reduces computation time; on the other hand, it uses a supervised discriminant analysis technique, which effectively improves recognition results.
Summary of the invention
The face recognition method based on discriminative linear representation preserving projection represents each training sample linearly using the other training samples of its class, and performs discriminant analysis on all training samples and their linear representations. Compared with the sparsity preserving projections method, it greatly reduces computation time and effectively improves recognition results.
Simulation experiments on the Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 face database (P. J. Phillips, P. J. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, W. Worek, "Overview of the Face Recognition Grand Challenge", IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 947-954, 2005) demonstrate the validity of the face recognition method based on discriminative linear representation preserving projection.
The technical scheme is as follows:

Let X = [X_1, X_2, …, X_c] denote a training sample set containing c classes, where X_i = [x_i1, x_i2, …, x_iN_i] is the training sample set of the i-th class, containing N_i samples, and x_ij ∈ R^d (R^d denotes the set of d-dimensional real vectors) is the j-th training sample of the i-th class; N = Σ_{i=1}^c N_i is the total number of training samples, and y ∈ R^d denotes a sample to be identified.
The steps of the face recognition method based on discriminative linear representation preserving projection are as follows:
Step 1: obtain a group of linear representation coefficients β_ij for each training sample x_ij by solving

    min_{β_ij} || x_ij − X_i β_ij ||_2        (1)

where the j-th element of β_ij is fixed at 0, so that x_ij is not used to represent itself.
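Per sample, the problem above is an ordinary least-squares fit over the classmates of x_ij. A minimal NumPy sketch (the helper name is hypothetical, and the j-th coefficient is held at zero so the sample does not represent itself):

```python
import numpy as np

def class_representation_coeffs(Xi, j):
    """Solve min_beta ||x_ij - Xi @ beta||_2 with beta[j] fixed at 0,
    i.e. represent the j-th sample of a class by its classmates."""
    Ni = Xi.shape[1]
    others = np.delete(np.arange(Ni), j)
    coef, *_ = np.linalg.lstsq(Xi[:, others], Xi[:, j], rcond=None)
    beta = np.zeros(Ni)
    beta[others] = coef
    return beta

rng = np.random.default_rng(1)
Xi = rng.standard_normal((60, 18))   # one class: 18 samples of dimension 60
beta = class_representation_coeffs(Xi, 0)
print(beta[0])                       # 0.0 -- the sample never represents itself
```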
Step 2: perform discriminant analysis on the training samples and their linear representations:

    max_v ( Σ_{i=1}^c Σ_{j=1, j≠i}^c Σ_{p=1}^{N_i} Σ_{q=1}^{N_j} || v^T x_ip − v^T X_j β_jq ||_2^2 ) / ( Σ_{i=1}^c Σ_{p=1}^{N_i} Σ_{q=1}^{N_i} || v^T x_ip − v^T X_i β_iq ||_2^2 )        (2)

where v ∈ R^d is the linear projection vector;
Formula (2) can be converted to

    max_v ( v^T X P X^T v ) / ( v^T X Q X^T v )        (3)

where P = [(N·I − I_c) + A(N·I − I_c)A^T] − [(E − E_c)A^T + A(E − E_c)] and Q = (I_c + A I_c A^T) − (E_c A^T + A E_c); here I ∈ R^{N×N} is the identity matrix, E ∈ R^{N×N} is the square matrix whose elements are all 1, I_c ∈ R^{N×N} is the block-diagonal matrix whose i-th diagonal block is N_i I_{N_i} (I_{N_i} ∈ R^{N_i×N_i} an identity matrix), E_c ∈ R^{N×N} is the block-diagonal matrix whose i-th diagonal block is E_{N_i} ∈ R^{N_i×N_i}, a square matrix whose elements are all 1, and A ∈ R^{N×N} is the block-diagonal matrix whose i-th diagonal block is B_i = [β_i1, β_i2, …, β_iN_i], satisfying XA = [X_1 B_1, X_2 B_2, …, X_c B_c].
The solution v* of formula (3) is obtained by performing eigendecomposition on the matrix (X Q X^T)^{-1} X P X^T.
Step 3: once the eigenvectors v_k (k = 1, 2, …, m) corresponding to the m largest eigenvalues of the matrix (X Q X^T)^{-1} X P X^T have been obtained (m is an adjustable parameter), let V = [v_1, v_2, …, v_m]; the projected training sample feature set is Z_X = V^T X and the projected feature of the sample to be identified is z_y = V^T y. Compute the distance from z_y to each training sample feature, and assign y to the class of the training sample whose feature is nearest.
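The three steps can be sketched end to end in NumPy. Rather than forming P and Q, this sketch accumulates the numerator and denominator scatter matrices of formula (2) directly; all names are illustrative assumptions, and the quadruple loop favors clarity over speed:

```python
import numpy as np

def dlrpp_fit(classes, m):
    """Steps 1-2: linearly represent each sample by its classmates, then
    solve the discriminant ratio of formula (2) as a generalized
    eigenproblem, returning the top-m projection vectors (d x m)."""
    d = classes[0].shape[0]
    # Step 1: class-wise least-squares representation of every sample
    reps = []
    for Xi in classes:
        Ri = np.empty_like(Xi)
        for j in range(Xi.shape[1]):
            others = np.delete(np.arange(Xi.shape[1]), j)
            coef, *_ = np.linalg.lstsq(Xi[:, others], Xi[:, j], rcond=None)
            Ri[:, j] = Xi[:, others] @ coef
        reps.append(Ri)
    # Step 2: between-class (numerator) vs. within-class (denominator)
    # scatter of samples against the linear representations
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for i, Xi in enumerate(classes):
        for p in range(Xi.shape[1]):
            for j, Rj in enumerate(reps):
                for q in range(Rj.shape[1]):
                    diff = Xi[:, p] - Rj[:, q]
                    if i == j:
                        Sw += np.outer(diff, diff)
                    else:
                        Sb += np.outer(diff, diff)
    w, V = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-w.real)
    return V[:, order[:m]].real

def classify(V, classes, y):
    """Step 3: assign y to the class of the nearest projected training sample."""
    zy = V.T @ y
    dists = [np.linalg.norm(V.T @ Xi - zy[:, None], axis=0).min()
             for Xi in classes]
    return int(np.argmin(dists))

rng = np.random.default_rng(2)
classes = [rng.standard_normal((20, 8)) + mu for mu in (0.0, 6.0)]  # 2 toy classes
V = dlrpp_fit(classes, 3)
print(classify(V, classes, classes[1][:, 0]))  # 1 -- a class-1 training sample
```

On real data one would replace the toy classes with the per-class image matrices and tune m by validation.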
Beneficial effect
Compared with the prior art, the above technical scheme has the following advantages: the present invention provides a face recognition method based on discriminative linear representation preserving projection, which linearly represents each training sample using the other training samples of its class, and performs discriminant analysis on all training samples and their linear representations. Compared with the prior art, the invention greatly reduces computation time and effectively improves recognition results.
Brief description of the drawings
Fig. 1 shows face sample images;
Fig. 2 shows the recognition-rate curves of the 20 random tests.
Embodiment
The technical scheme of the present invention is described below with reference to the accompanying drawings.
The experimental verification uses the Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 face database (P. J. Phillips, P. J. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, W. Worek, "Overview of the Face Recognition Grand Challenge", IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 947-954, 2005). The database is fairly large and contains three subsets: training, target, and query. The training subset contains 12776 images of 222 people, the target subset contains 16028 images of 466 people, and the query subset contains 8014 images of 466 people. The experiments select 100 people from the training subset, with 36 images per person. All chosen images are converted from the original color images to grayscale, corrected (so that the two eyes are horizontal), scaled, and cropped; each image sample retains only the 60 × 60 face and nearby region. The processed face sample images are shown in Fig. 1.
In the experimental database, 18 face image samples of each class are randomly selected as training samples, the remaining samples serve as samples to be identified, and 20 random tests are carried out.
Fig. 2 and Table 1 show the recognition performance over the 20 random tests of the sparsity preserving projections method (SPP in the figures) and the face recognition method based on discriminative linear representation preserving projection (DLRPP in the figures). In Fig. 2, the abscissa is the index of the random test and the ordinate is the recognition rate (= number of correctly identified probe samples / total number of probe samples). Table 1 gives the mean and standard deviation of the recognition rate of the two methods over the 20 random tests, together with the average training time. Compared with the sparsity preserving projections method, the recognition performance of the proposed method is significantly higher and its training time is greatly reduced. This demonstrates the validity of the face recognition method based on discriminative linear representation preserving projection.
Table 1
| Method | Recognition rate (mean ± std, %) | Average training time (s) |
|---|---|---|
| SPP | 76.52 ± 4.60 | 3446.84 |
| DLRPP | 91.31 ± 1.84 | 2.62 |
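The recognition rate reported above is simply the fraction of correctly identified probe samples; as a one-line sketch (function name assumed):

```python
import numpy as np

def recognition_rate(predicted, truth):
    """Recognition rate = correctly identified probe samples / total probes."""
    return float(np.mean(np.asarray(predicted) == np.asarray(truth)))

print(recognition_rate([0, 1, 1, 2], [0, 1, 2, 2]))  # 0.75
```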
Claims (1)
1. A face recognition method based on discriminative linear representation preserving projection, characterized in that:

Let X = [X_1, X_2, …, X_c] denote a training sample set containing c classes, where X_i = [x_i1, x_i2, …, x_iN_i] is the training sample set of the i-th class, containing N_i samples, and x_ij ∈ R^d (R^d denotes the set of d-dimensional real vectors) is the j-th training sample of the i-th class; N = Σ_{i=1}^c N_i; y ∈ R^d denotes a sample to be identified;

The steps are as follows:

Step 1: obtain a group of linear representation coefficients β_ij for each training sample x_ij by solving
    min_{β_ij} || x_ij − X_i β_ij ||_2        (1)
where the j-th element of β_ij is fixed at 0, so that x_ij is not used to represent itself;
Step 2: perform discriminant analysis on the training samples and their linear representations:
    max_v ( Σ_{i=1}^c Σ_{j=1, j≠i}^c Σ_{p=1}^{N_i} Σ_{q=1}^{N_j} || v^T x_ip − v^T X_j β_jq ||_2^2 ) / ( Σ_{i=1}^c Σ_{p=1}^{N_i} Σ_{q=1}^{N_i} || v^T x_ip − v^T X_i β_iq ||_2^2 )        (2)
where v ∈ R^d is the linear projection vector;
Formula (2) is converted to
    max_v ( v^T X P X^T v ) / ( v^T X Q X^T v )        (3)
where P = [(N·I − I_c) + A(N·I − I_c)A^T] − [(E − E_c)A^T + A(E − E_c)] and Q = (I_c + A I_c A^T) − (E_c A^T + A E_c); here I ∈ R^{N×N} is the identity matrix, E ∈ R^{N×N} is the square matrix whose elements are all 1, I_c ∈ R^{N×N} is the block-diagonal matrix whose i-th diagonal block is N_i I_{N_i} (I_{N_i} ∈ R^{N_i×N_i} an identity matrix), E_c ∈ R^{N×N} is the block-diagonal matrix whose i-th diagonal block is E_{N_i} ∈ R^{N_i×N_i}, a square matrix whose elements are all 1, and A ∈ R^{N×N} is the block-diagonal matrix whose i-th diagonal block is B_i = [β_i1, β_i2, …, β_iN_i], satisfying XA = [X_1 B_1, X_2 B_2, …, X_c B_c];
The solution v* of formula (3) is obtained by performing eigendecomposition on the matrix (X Q X^T)^{-1} X P X^T;
Step 3: once the eigenvectors v_k (k = 1, 2, …, m) corresponding to the m largest eigenvalues of the matrix (X Q X^T)^{-1} X P X^T have been obtained (m is an adjustable parameter), let V = [v_1, v_2, …, v_m]; the projected training sample feature set is Z_X = V^T X and the projected feature of the sample to be identified is z_y = V^T y; compute the distance from z_y to each training sample feature, and assign y to the class of the training sample whose feature is nearest.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710800209.XA | 2017-09-07 | 2017-09-07 | Face recognition method based on discriminative linear representation preserving projection |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN107563334A | 2018-01-09 |
| CN107563334B | 2020-08-11 |