A face recognition method based on a joint sparse model and sparsity preserving projection
Technical field
The invention belongs to the fields of biometric identification and pattern recognition, and relates in particular to sparsity preserving projection and joint sparse model methods.
Background art
With the development of society, the demand in every field for fast and effective automatic identity authentication has become increasingly urgent; identification and verification have important application value in national security, public safety and military security. Because biometric features are highly stable and differ strongly from individual to individual, they have become the preferred basis for identity recognition.
In recent years, face recognition, as a computer security technique, has developed rapidly worldwide and has attracted ever wider attention. Its application background is broad: it can be used for criminal identification in public security investigations, certificate verification, surveillance at banks, customs, airports and classified departments, automatic access control systems, video conferencing and so on. These many applications make face recognition an extremely meaningful and challenging research problem.
Because of these broad application prospects, face recognition has become one of the research hotspots of computer vision in recent years. After many years of development it has made great progress, and researchers have proposed a series of face recognition methods, such as principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA).
Wright et al. introduced sparse representation into face recognition and proposed the sparse representation-based classification (SRC) algorithm. SRC takes all of the training face images as a dictionary (each face image is one atom); sparsely representing a test face image then amounts to finding its linear combination over these dictionary atoms.
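The SRC idea described above can be sketched in a few lines. The following is a minimal illustration rather than the exact algorithm of Wright et al.: a greedy orthogonal matching pursuit stands in for the ℓ1 solver, and the dictionary, labels, function names and test sample are all synthetic.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate sparse code of y over dictionary A."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

def src_classify(A, labels, y, n_nonzero=5):
    """Classify y by the class whose atoms give the smallest reconstruction residual."""
    x = omp(A, y, n_nonzero)
    best_cls, best_err = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)           # keep only class-c coefficients
        err = np.linalg.norm(y - A @ xc)
        if err < best_err:
            best_cls, best_err = c, err
    return best_cls

# synthetic demo: two classes of unit-norm atoms, test sample built from class-1 atoms
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 20))
A /= np.linalg.norm(A, axis=0)
labels = np.array([0] * 10 + [1] * 10)
y = A[:, 12] * 0.7 + A[:, 15] * 0.3
print(src_classify(A, labels, y))                    # expected: 1
```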
Face recognition methods based on the joint sparsity model (Joint Sparsity Models, JSM) have also been proposed, but they suffer from a low recognition rate.
Summary of the invention
The object of the present invention is to address the low recognition rate of the prior art by providing a face recognition method based on a joint sparse model and sparsity preserving projection.
The present invention is achieved by the following technical solutions:
In the face recognition method based on a joint sparse model and sparsity preserving projection, the transform basis formed from all of the training images replaces the usual random matrix of the JSM algorithm. The JSM algorithm extracts the common part and the innovation parts of each class of training face images (the images in the face database are grouped by person, all images of the same person forming one class; the common part expresses the facial features shared by a class of face images, while the innovation parts express details such as differing expression and illumination). The images reconstructed from the sparse common and innovation parts are made to approximate the original training images, and the dimensionality reduction matrix is obtained by solving the optimization problem that minimizes this reconstruction error. Finally, the dimensionality reduction matrix reduces the test image, the test image is reconstructed from the reduced common and innovation parts of each class of training images, and the class with the smallest reconstruction error is taken as the class of the test image. The concrete steps are as follows:
Step 1: preprocessing
1.1) All images in the face database are normalized: each image is first resized to a 32 × 32 gray-value matrix, and its columns are then stacked in order, one after another, into a single 1024 × 1 gray-value matrix. All subsequent operations are performed on these normalized image gray-value matrices. The face database contains images of different people under different expressions; the images of the same person form one class, several images of each class are taken as training images, and the remaining images serve as test images.
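A minimal sketch of this normalization, assuming the column-stacking reading of the 1024 × 1 vectorization (a random matrix stands in for a resized face image):

```python
import numpy as np

# a stand-in 32x32 grayscale image (real images would first be resized to 32x32)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)

# column stacking: append each column in order after the first, giving a 1024x1 vector
vec = img.reshape(-1, 1, order="F")
print(vec.shape)                                 # (1024, 1)

# sanity check: the first 32 entries are exactly the first column of the image
assert np.array_equal(vec[:32, 0], img[:, 0])
```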
Step 2: JSM feature extraction from the training set
Using the joint sparse model (JSM) algorithm with the transform basis formed from all training images, extract for each class of training images the common part in the spatial domain and the innovation parts, as follows:
2.1) Compute the feature set W_k of the k-th class of training images.
W_k is a matrix collecting the common part of class k and the innovation part of every training image of class k. The common part expresses the facial features shared by the face images of one person; the innovation parts express that person's detail variations such as expression and illumination.
Concrete computation: first, the training images of all classes are pre-reduced with the PCA algorithm to obtain the reduced training image set g_k, where g_{k,j} denotes the PCA features of the j-th face image of the k-th person and J_k is the number of training images in each class. The reduced image set g_k is then fed into the JSM algorithm to obtain W_k, where A = [A_1, A_2, ... A_i ..., A_M], M is the total number of training images over all classes, the training images are numbered 1 to M, A_i is the gray-value matrix of the i-th training image after PCA reduction (representing that training image in the reduced space), and T denotes the transpose operation; this yields W_k.
2.2) The common part of the k-th class of training images and the innovation part of each training image of class k are obtained by solving the minimum ℓ1-norm problem of sparsity preserving projection; the sparse representation of W_k is
W_k = arg min ||W_k||_1
In the optimal solution, J_k denotes the number of training images of class k; one block of components represents the common part of the class and the remaining blocks represent the innovation part of each training image of the class.
2.3) From the common and innovation parts extracted in 2.2), compute the common part of the k-th class of training images in the spatial domain and the sum of its innovation parts, where A is the matrix defined in step 2.1) and the spatial-domain innovation part of the j-th training image of class k is obtained in the same way.
Since step 2.2) obtains the sparse representation of W_k by solving a minimum ℓ1-norm problem, the resulting common and innovation parts are expressed in the sparse domain; an inverse transform is therefore applied to obtain the common and innovation parts of the face images in the spatial domain.
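The common/innovation split of steps 2.1)–2.3) can be sketched as follows. This is a hedged illustration rather than the patented procedure: ISTA stands in for the ℓ1 solver, the JSM dictionary is built explicitly as one common block plus per-image innovation blocks, and all data and names are synthetic.

```python
import numpy as np

def ista(Psi, g, lam=0.05, n_iter=500):
    """ISTA for the lasso: min 0.5 * ||g - Psi w||^2 + lam * ||w||_1."""
    L = np.linalg.norm(Psi, 2) ** 2              # Lipschitz constant of the gradient
    w = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        w = w - Psi.T @ (Psi @ w - g) / L        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
    return w

def jsm_split(A, G, lam=0.05):
    """Split the J class images (columns of G) into one common and J innovation parts.
    Model: g_j = A w_c + A w_j, solved jointly with a sparsity penalty."""
    J, m = G.shape[1], A.shape[1]
    g = G.reshape(-1, order="F")                 # stack the J images into one vector
    Psi = np.hstack([np.tile(A, (J, 1)),         # common block, repeated per image
                     np.kron(np.eye(J), A)])     # block-diagonal innovation blocks
    w = ista(Psi, g, lam)
    common = A @ w[:m]
    innovations = [A @ w[m * (j + 1): m * (j + 2)] for j in range(J)]
    return common, innovations

# synthetic demo: 3 "images" = one shared component + small individual details
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
A /= np.linalg.norm(A, axis=0)
shared = A[:, 0]
G = np.column_stack([shared + 0.1 * A[:, j + 1] for j in range(3)])
common, innov = jsm_split(A, G)
print(np.linalg.norm(common - shared) < 0.5)     # common part tracks the shared component
```

The ℓ1 penalty prefers to place the shared component in the common block (one coefficient) rather than repeating it in every innovation block (J coefficients), which is the intuition behind the joint sparse model.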
Step 3: computing the dimensionality reduction matrix
The images reconstructed from the sparse common and innovation parts extracted in step 2 are required to approximate the original training images so that the reconstruction error is minimized, and the dimensionality reduction matrix is obtained by solving this optimization problem. The steps are as follows:
3.1) According to the joint sparse model, all training images of all classes are reconstructed from the spatial-domain common part and innovation parts of each class extracted in step 2; the gray-value matrix t_kj of the j-th training image of class k after PCA reduction is reconstructed into the reconstructed image matrix. The common part of class k in the spatial domain is the one obtained in step 2.3); f_k combines it with the sum of the class-k innovation parts, using the common part obtained by the JSM algorithm of step 2.1), and is computed through
r_kj = Ψ′ W_kj
where r_kj denotes the training image matrix t_kj minus the common part of its class, Ψ′ = [B′, C′], B′ = [A, A]^T, A = [A_1, A_2, ... A_i ..., A_M], M is the total number of training images of all classes, the training images are numbered 1 to M, A_i is the gray-value matrix of the i-th training image after PCA reduction, and T denotes the transpose operation; this yields the reconstruction.
3.2) Using the joint sparse model and the reconstructed image matrices obtained in step 3.1), solve for the dimensionality reduction matrix: the reconstructed images of all classes are required to approximate the original training signals as closely as possible, and the dimensionality reduction matrix is solved from the resulting formula.
For convenience of derivation the formula is rewritten once: i replaces k as the class index, x_ij replaces t_kj as the j-th training image matrix of class i, and the corresponding reconstructed image matrix follows the reconstruction of 3.1), with the reconstructed common part of the j-th training image of class i replacing f_k. Here A is the set of gray-value matrices of all training images, J_i is the number of training samples of class i, K is the total number of classes, and w is the dimensionality reduction matrix to be solved.
The above formula can be rewritten by a simple transformation. Let e_ij be the label vector of the training images of class i, of size M × 1, where M is the number of all training samples; its (i−1)J_i + j-th element is 1 and the other elements are 0. The formula then simplifies, with corresponding terms equal.
θ_c is composed of the common parts of the classes: each of its columns is the column-vector form of the common part of the class to which the corresponding training image belongs, i.e. the reconstructed common part of the j-th training image of class i. θ_cl, likewise composed of the common parts of the classes, contains the common part of one class per row. The derivation then proceeds as follows.
Using the sparsity preserving projection method with the constraint w^T A A^T w = 1, the objective function turns into an optimization problem; to simplify the expression, this minimization is converted into the equivalent maximization. It can finally be derived that w consists of the eigenvectors corresponding to the d largest eigenvalues of the generalized eigenvalue problem
A θ_β A^T w = λ A A^T w
where λ is the eigenvalue of the equation (computed in the program implementation); the required dimensionality reduction matrix is obtained in this way.
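The generalized eigenvalue problem A θ_β A^T w = λ A A^T w can be solved numerically as sketched below. This assumes a symmetric positive semidefinite stand-in for θ_β (whose exact entries the derivation above defines only symbolically) and adds a small ridge so that A A^T is invertible; all sizes and names are illustrative.

```python
import numpy as np

def reduction_matrix(A, theta_beta, d, eps=1e-6):
    """Top-d generalized eigenvectors of  A @ theta_beta @ A.T  w = lam * A @ A.T  w.
    A small ridge eps*I keeps the right-hand matrix invertible."""
    M = A @ theta_beta @ A.T
    B = A @ A.T + eps * np.eye(A.shape[0])
    vals, vecs = np.linalg.eig(np.linalg.solve(B, M))   # reduce to standard eigenproblem
    order = np.argsort(vals.real)[::-1]                 # largest eigenvalues first
    return vecs[:, order[:d]].real                      # columns are the projections w

# synthetic demo: 20-dim features, 30 training atoms, reduce to d = 5
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 30))
S = rng.standard_normal((30, 30))
theta_beta = S @ S.T                                    # symmetric PSD stand-in
W = reduction_matrix(A, theta_beta, d=5)
print(W.shape)                                          # (20, 5)
```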
Step 4: classification
Using the dimensionality reduction matrix obtained above, the test image and the common and innovation feature images of every class are all reduced; the test sample is then reconstructed from the reduced common and innovation features of each class, and the class with the smallest reconstruction error is the class of the test sample. The classification steps are as follows:
4.1) With the dimensionality reduction matrix w, reduce the test image y and, for each class obtained in step 2.3), the common part and the sum of the innovation parts of its training images: y, the common part of class k and the sum of the class-k innovation parts are each multiplied by w.
4.2) Using the reduced common part and reduced sum of innovation parts of each class from 4.1), reconstruct the reduced test image separately for every class, obtaining the set of reconstructed test image matrices, where K denotes the number of classes and the reconstruction for class i is computed from the reduced common part of class i and the reduced sum of its innovation parts. Here f_i combines the reduced test image with the common part of class i, and the difference r_i is represented through the common part obtained by the JSM algorithm of step 2.1), computed as
r_i = Ψ′ W_i
where Ψ′ = [B′, C′] as in step 3.1), w is the dimensionality reduction matrix obtained in step 3, A = [A_1, A_2, ... A_i ..., A_M], M is the total number of training images of all classes, the training images are numbered 1 to M, A_i is the gray-value matrix of the i-th training image after PCA reduction, and T denotes the transpose operation; this yields W_i.
4.3) Compute the reconstruction error l of the reduced test image against each class's reconstruction, and assign the test image y to the class with the smallest l.
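A hedged sketch of the classification rule of step 4: the test vector and each class's common/innovation features are reduced with w, the reduced test vector is fitted to each class's reduced features (least squares stands in for the JSM reconstruction), and the class with the smallest residual wins. All data and names are synthetic.

```python
import numpy as np

def classify(w, y, commons, innovations):
    """Reduce y and each class's common/innovation features with w, reconstruct the
    reduced test vector from each class's features, pick the smallest residual."""
    y_r = w.T @ y
    best_cls, best_err = None, np.inf
    for c, (com, inn) in enumerate(zip(commons, innovations)):
        feats = w.T @ np.column_stack([com, inn])       # reduced class features
        coef, *_ = np.linalg.lstsq(feats, y_r, rcond=None)
        err = np.linalg.norm(y_r - feats @ coef)        # class reconstruction error
        if err < best_err:
            best_cls, best_err = c, err
    return best_cls

# synthetic demo: 2 classes, test sample built from class-0 features
rng = np.random.default_rng(3)
n, d = 50, 8
w = rng.standard_normal((n, d))
commons = [rng.standard_normal(n) for _ in range(2)]
innovations = [rng.standard_normal((n, 3)) for _ in range(2)]
y = commons[0] + innovations[0] @ np.array([0.2, -0.1, 0.3])
print(classify(w, y, commons, innovations))             # expected: 0
```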
Beneficial effects
The face recognition method based on a joint sparse model and sparsity preserving projection not only occupies little storage space but also takes both within-class and between-class correlation into account, improving the recognition accuracy.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 illustrates JSM feature extraction from the training set.
Specific embodiments
Fig. 1 is the flow chart of the proposed face recognition method based on a joint sparse model and sparsity preserving projection. The whole flow is divided into a training module and a recognition module. The training module preprocesses the training images, extracts the common part and the innovation parts of each class of training images, and computes the dimensionality reduction matrix by making the reconstructed training samples approximate the original training samples with minimum error. The recognition module preprocesses an unknown test image and reconstructs it from the common and innovation features of each class; the class with the smallest reconstruction error is the class of the test image.
The implementation of the present invention is described in detail with reference to Fig. 1. The embodiments are implemented on the premise of the technical solution of the present invention, with detailed implementation modes and concrete operating procedures given, but the protection scope of the present invention is not limited to the following embodiments.
The embodiment uses a public face database, the Yale face database, which contains 15 people with 11 pictures each, mainly covering changes of illumination and expression. The 15 people form 15 classes of 11 face images each. In the experiment, 5 face images of each class are randomly selected as training images and the rest serve as test images, giving 75 training images in total.
The detailed points of the technical solution of the invention are explained below.
Step 1: preprocessing
1.1) All 165 images in the face database are normalized: each image is first compressed to 32 × 32 pixels, and the columns of its gray-value matrix are then stacked in order into a 1024 × 1 gray-value matrix. All subsequent operations are performed on these normalized image gray-value matrices.
Step 2: JSM feature extraction from the training set (Fig. 2)
In this step, the transform basis formed from all training images is substituted into the JSM algorithm to extract features.
2.1) First, the training images of all classes are pre-reduced with the PCA algorithm, with the reduced feature dimensionality set to 75, giving the reduced training image set g_k, where g_{k,j} denotes the PCA features of the j-th face image of the k-th person, J_k is the number of training images in each class, k = 1, 2, ..., 15 and j = 1, 2, ..., 5. The reduced image set g_k is then fed into the JSM algorithm to obtain W_k, where A = [A_1, A_2, ... A_i ..., A_M], M = 75 is the total number of training images of all classes, the training images are numbered 1 to M, and A_i is the gray-value matrix of the i-th training image after PCA reduction. Since there are 75 training images in total and each image yields 75 PCA features, A is of size 75 × 75; T denotes the transpose operation, and W_k is obtained.
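The PCA pre-reduction of step 2.1) can be sketched as follows; here 75 random 1024-dimensional vectors stand in for the vectorized training images, so that the transform basis comes out square (75 × 75) as stated above. The function name is illustrative.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the columns of X (one vectorized image per column) onto the
    top principal components of the training set."""
    mean = X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)   # PCA via SVD
    return U[:, :n_components].T @ (X - mean)                 # features x images

# stand-in for the embodiment: 75 "images" of 1024 pixels reduced to 75 features each,
# so the transform basis A is square (75 x 75) as in step 2.1)
rng = np.random.default_rng(4)
X = rng.standard_normal((1024, 75))
A = pca_reduce(X, 75)
print(A.shape)                                                # (75, 75)
```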
2.2) The common part of the k-th class of training images and the innovation part of each training image of class k are obtained by solving the minimum ℓ1-norm problem of sparsity preserving projection; the sparse representation of W_k is
W_k = arg min ||W_k||_1
In the optimal solution, J_k denotes the number of training images of class k; one block of components represents the common part of the class and the remaining blocks represent the innovation part of each training image of the class.
2.3) From the common and innovation parts extracted in 2.2), compute the common part of the k-th class of training images in the spatial domain and the sum of its innovation parts, where A is the matrix defined in step 2.1) and the spatial-domain innovation part of the j-th training image of class k is obtained in the same way.
Step 3: computing the dimensionality reduction matrix
3.1) According to the joint sparse model, all training images of all classes are reconstructed from the spatial-domain common part and innovation parts of each class extracted in step 2; the gray-value matrix t_kj of the j-th training image of class k after PCA reduction is reconstructed into the reconstructed image matrix. The common part of class k in the spatial domain is the one obtained in step 2.3); f_k combines it with the sum of the class-k innovation parts, using the common part obtained by the JSM algorithm of step 2.1), and is computed through
r_kj = Ψ′ W_kj
where r_kj denotes the training image matrix t_kj minus the common part of its class, Ψ′ = [B′, C′], B′ = [A, A]^T, A = [A_1, A_2, ... A_i ..., A_M], M is the total number of training images of all classes, the training images are numbered 1 to M, A_i is the gray-value matrix of the i-th training image after PCA reduction, and T denotes the transpose operation.
3.2) Using the joint sparse model and the reconstructed image matrices obtained in 3.1), solve for the dimensionality reduction matrix: the reconstructed images of all classes are required to approximate the original training signals as closely as possible, and the dimensionality reduction matrix is solved from the resulting formula.
For convenience of derivation, i replaces k as the class index, x_ij replaces t_kj as the j-th training image matrix of class i, and the corresponding reconstructed image matrix follows the reconstruction of 3.1), with the reconstructed common part of the j-th training image of class i replacing f_k. Here A is the set of gray-value matrices of all training images, J_i is the number of training samples of class i, K is the total number of classes, and w is the dimensionality reduction matrix to be solved.
Let e_ij be the label vector of the training images of class i, of size M × 1, where M is the number of all training samples; its (i−1)J_i + j-th element is 1 and the other elements are 0, which simplifies the formula.
Using the sparsity preserving projection method with the constraint w^T A A^T w = 1, the objective function turns into an optimization problem; to simplify the expression, this minimization is converted into the equivalent maximization. It can finally be derived that w consists of the eigenvectors corresponding to the d largest eigenvalues of the generalized eigenvalue problem
A θ_β A^T w = λ A A^T w
where θ_c is composed of the common parts of the classes, each of its columns being the column-vector form of the common part of the class of the corresponding training image (the reconstructed common part of the j-th training image of class i); θ_cl, likewise composed of the common parts of the classes, contains the common part of one class per row; and λ denotes the eigenvalue of the equation. In the program implementation, substituting the above variables into the formula in turn yields the required dimensionality reduction matrix.
Step 4: classification
4.1) Choose a test image y. With the dimensionality reduction matrix w, reduce y and, for each class obtained in step 2.3), the common part and the sum of the innovation parts of its training images: y, the common part of class k and the sum of the class-k innovation parts are each multiplied by w.
4.2) Using the reduced common part and reduced sum of innovation parts of each class from 4.1), reconstruct the reduced test image separately for every class, obtaining the set of reconstructed test image matrices, where K denotes the number of classes and the reconstruction for class i is computed from the reduced common part of class i and the reduced sum of its innovation parts. Here f_i combines the reduced test image with the common part of class i, and the difference r_i is represented through the common part obtained by the JSM algorithm of step 2.1), computed as
r_i = Ψ′ W_i
where Ψ′ = [B′, C′], w is the dimensionality reduction matrix obtained in step 3, A = [A_1, A_2, ... A_i ..., A_M], M is the total number of training images of all classes, the training images are numbered 1 to M, A_i is the gray-value matrix of the i-th training image after PCA reduction, and T denotes the transpose operation; this yields W_i.
4.3) Compute the reconstruction error l of the reduced test image against each class's reconstruction, and assign the test image y to the class with the smallest l.
The experimental results of the present invention are explained in detail below.
The Yale and CMU-AMP face databases are used in the experiments. The Yale face database contains 15 volunteers with 11 pictures each, 165 pictures in total, mainly covering changes of illumination and expression. In the experiments each image is normalized to 32 × 32 pixels. For each class of face images, 5 different images are randomly selected as training samples and the rest serve as test samples; 5 runs are performed and the results averaged. The experimental results are shown in Table 1. Random selection of the training samples and repeated runs ensure the stability of the results.
Table 1: comparison of recognition rates
The CMU-AMP face database contains 13 volunteers with 75 pictures each, 975 pictures in total, mainly covering happy, angry and surprised expressions. In the experiments each image is normalized to 32 × 32 pixels. For each class of face images, 5 different images are randomly selected as training samples and the rest serve as test samples; 5 runs are performed and the results averaged. The experimental results are shown in Table 2.
Table 2: comparison of recognition rates
The dimension in Table 1 denotes the dimensionality to which the test images are reduced by the obtained dimensionality reduction matrix. In the tables, JSM denotes the method based on the joint sparse model, and the inventive method is the proposed method that computes the dimensionality reduction matrix. The experimental results on the Yale and CMU-AMP databases show that the method presented here is superior to JSM at low dimensionalities.
Because the idea of sparsity preserving projection is introduced into the joint sparse model, the common and innovation features extracted by sparse representation over the transform basis formed from all training samples take both within-class and between-class correlation into account. The computation formula of the dimensionality reduction matrix is derived from the sparse projection, ensuring that the signals reconstructed from the common and innovation parts of the training samples approximate the original signals as closely as possible, so that data originally in a high-dimensional space are projected to a low-dimensional space while the main features of the original images are retained. The experiments on the face databases show that the recognition rate is indeed improved.