Summary of the invention
The technical problem to be solved by this invention is that prior-art face recognition methods cannot effectively handle occlusion, disguise, and illumination variation occurring simultaneously in facial images. To remedy this defect, a face recognition method based on low-rank block sparse representation is provided, so as to improve the accuracy and robustness of face recognition under complex conditions such as occlusion, disguise, and illumination variation in facial images.
The present invention solves the above technical problem through the following technical solutions:
A face recognition method based on low-rank block sparse representation, the concrete steps of which are as follows:
Step 1: for each subject in the face database, randomly select part of its images as training images and the rest as test images; assemble the training images and the test images of all subjects to form the initial training data matrix and the test matrix, respectively;
Step 2: decompose the training data matrix D into A + E, where A denotes the low-rank matrix and E denotes the sparse error after decomposition; by minimizing the rank of the low-rank matrix A while reducing the value of the zero norm ||E||_0, the best low-rank approximation of the training data matrix D is reached. The low-rank matrix decomposition is formulated as:

min_{A,E} ||A||_* + λ||E||_1  subject to  D = A + E   (1)

In formula (1), the nuclear norm ||A||_* is an approximation of the rank of the low-rank matrix A, the l1 norm ||E||_1 is a substitute for the zero norm ||E||_0, and λ is a weighting parameter;
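As a concrete illustration of formula (1), the convex surrogate can be solved with the standard inexact augmented Lagrange multiplier scheme. The following is a minimal NumPy sketch; the default λ and the penalty schedule are common choices, not values from this document:

```python
import numpy as np

def rpca_ialm(D, lam=None, rho=1.5, tol=1e-7, max_iter=500):
    """Minimal sketch of formula (1): min ||A||_* + lam*||E||_1 s.t. D = A + E,
    solved via inexact augmented Lagrange multipliers (illustrative defaults)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # common default weight
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)          # initial penalty parameter
    for _ in range(max_iter):
        # A-update: singular value thresholding at level 1/mu
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E-update: entrywise soft-thresholding at level lam/mu
        T = D - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        R = D - A - E                         # constraint residual
        Y = Y + mu * R                        # multiplier (dual) update
        mu *= rho                             # grow the penalty parameter
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return A, E

# toy check: rank-1 data plus sparse corruptions is cleanly separated
rng = np.random.default_rng(0)
L = np.outer(rng.standard_normal(40), rng.standard_normal(30))
S = np.zeros_like(L)
S[rng.integers(0, 40, 20), rng.integers(0, 30, 20)] = 5.0
D = L + S
A, E = rpca_ialm(D)
print(np.linalg.norm(A - L) / np.linalg.norm(L) < 0.1)  # low-rank part recovered
```

The two inner updates are exactly the singular-value and entrywise shrinkage steps that steps 6 and 7 below specialize to the per-subject objective.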
Step 3: introduce a reference term and, based on formula (1) of the low-rank matrix decomposition, establish the objective function:

min_{A_i,E_i} Σ_{i=1}^{c} ( ||A_i||_* + λ1||E_i||_1 ) + λ2 Ψ(A_1, A_2, ..., A_c)  subject to  D_i = A_i + E_i   (2)

In formula (2), i = 1, 2, ..., c, where c is the number of classes in the training data matrix; D_i is the i-th training data matrix, A_i is the i-th low-rank matrix, and E_i is the i-th sparse error matrix; Ψ(A_1, A_2, ..., A_c) is the reference term for improving the discriminative ability of the low-rank matrices; the parameter λ1 is a positive weight coefficient, and the parameter λ2 is a constant with λ2 ≥ 0;
Step 4: simplify the objective function; after final arrangement the objective function becomes:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2||A_i − M_i||_F^2  subject to  D_i = A_i + E_i   (10)

where M_i = [m_i, m_i, ..., m_i] ∈ R^{d×n_i}, d is the number of rows after a face image is converted into a column vector, m_i denotes the mean vector of the i-th subject, n_i denotes the number of samples of the i-th subject, and || · ||_F denotes the Frobenius norm;
Step 5: apply the augmented Lagrange multiplier method to formula (10):

L(A_i, E_i, Y_i, μ) = ||A_i||_* + λ1||E_i||_1 + λ2||A_i'||_F^2 + <Y_i, D_i − A_i − E_i> + (μ/2)||D_i − A_i − E_i||_F^2   (12)

where A_i' = A_i − M_i, Y_i denotes the Lagrange multiplier corresponding to the i-th subject, the aim being to find the extremum of the function by iteration, and μ denotes a positive parameter;
Step 6: for formula (12), update the low-rank matrix A by solving for A iteratively:

(U, S, V) = SVD( ε(2λ2 M_i + μ_k(D_i − E_i^k) + Y_i^k) ),   A_i^{k+1} = U S_ε[S] V^T

In the formula, ε = (2λ2 + μ_k)^{-1}, k denotes the number of iterations, E_i^k denotes the value of the i-th sparse error matrix after k iterations, Y_i^k denotes the value of the Lagrange multiplier corresponding to the i-th subject after k iterations, and μ_k denotes the value of the positive parameter after k iterations; the iterative process is controlled by adjusting the value of Y_i^k; SVD denotes the singular value decomposition, where U and V are unitary matrices, S is a diagonal matrix, and S_ε[·] soft-thresholds the singular values by ε;
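The core of the step-6 update is singular value thresholding: soft-threshold the singular values of the weighted combination and rebuild the matrix. A minimal sketch of the operator (the helper name and toy matrix are illustrative):

```python
import numpy as np

def svt(W, eps):
    """Step-6 style update: soft-threshold the singular values of W by eps
    and rebuild the matrix (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - eps, 0.0)) @ Vt

# a noisy rank-1 matrix: thresholding suppresses the small noise directions
rng = np.random.default_rng(1)
W = 3.0 * np.outer(rng.standard_normal(8), rng.standard_normal(6))
W += 0.01 * rng.standard_normal((8, 6))
A_svt = svt(W, eps=0.5)
print(np.linalg.matrix_rank(A_svt))  # -> 1
```

Because all noise singular values fall below the threshold ε, only the dominant rank-1 component survives, which is why the update drives A_i toward a low-rank matrix.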
Step 7: update the error matrix E:

E_i^{k+1} = S_{λ1/μ_k}[ D_i − A_i^{k+1} + Y_i^k/μ_k ]

In the formula, S_τ[x] = sign(x)·max(|x| − τ, 0) denotes the entrywise soft-thresholding (shrinkage) operator;
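The shrinkage operator used in the step-7 update can be sketched in a few lines (the function name is illustrative):

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise shrinkage from the Step-7 update:
    S_tau[x] = sign(x) * max(|x| - tau, 0)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

X = np.array([[-2.0, 0.3], [1.5, 0.1]])
out = soft_threshold(X, 0.5)
print(out)
```

Entries with magnitude below the threshold are zeroed and the rest are shrunk toward zero, which is what makes the recovered error matrix E sparse.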
Step 8: apply the discrete cosine transform to the low-rank matrix A obtained from the decomposition, so as to realize illumination normalization;
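Step 8 names only the DCT; one common way to normalize illumination with it (an assumption here, since the document does not specify which coefficients are adjusted) is to zero the lowest-frequency DCT coefficients, where slowly varying lighting concentrates, and invert the transform:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N)) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C

def dct_illum_normalize(img, cut=3):
    """Hedged sketch: drop the cut x cut low-frequency DCT block (assumed
    scheme -- the source only says DCT is applied), then invert."""
    h, w = img.shape
    Ch, Cw = dct_matrix(h), dct_matrix(w)
    F = Ch @ img @ Cw.T          # forward 2-D DCT
    F[:cut, :cut] = 0.0          # discard low-frequency (illumination) terms
    return Ch.T @ F @ Cw         # inverse 2-D DCT

x, y = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
face = np.sin(8 * np.pi * x)     # stand-in for higher-frequency facial detail
lit = face + 5 * y               # strong top-to-bottom lighting gradient
out = dct_illum_normalize(lit)
print(np.std(out.mean(axis=1)) < np.std(lit.mean(axis=1)))  # -> True
```

The smooth vertical gradient lives almost entirely in the first few DCT coefficients and is largely removed, while the higher-frequency detail pattern survives the cut.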
Step 9: partition the training images into overlapping blocks;
Step 10: following the partition manner applied to the training images in step 9, partition the test pictures into blocks correspondingly;
Step 11: using the face recognition algorithm based on sparse representation together with l1-norm minimization, solve the sparse coefficient of each module:

x_p = argmin_x ||x||_1  subject to  y_p = A_p x

In the formula, x is the sparse coefficient, x_p denotes the sparse coefficient corresponding to the p-th module, and y_p denotes the column vector formed by the p-th module of the test picture;
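The constrained l1 problem of step 11 is commonly relaxed to a lasso and solved with a proximal-gradient method. A minimal ISTA sketch (the document names no solver; ISTA and the values of lam and n_iter below are illustrative assumptions):

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Lasso relaxation of the Step-11 problem:
    min 0.5*||y - A x||_2^2 + lam*||x||_1, solved with ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))       # gradient step on data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x

# a test module that truly is a sparse combination of two training columns
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(60); x_true[[4, 17]] = [1.0, -0.8]
y = A @ x_true
x = ista(A, y)
print(sorted(np.argsort(-np.abs(x))[:2].tolist()))  # -> [4, 17]
```

The recovered coefficient vector concentrates its large entries on the two training columns that actually generated the test module, which is the property the classification stage relies on.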
Step 12: utilize the sparse coefficients of the different modules: combine the sparse coefficients of the same class across the different modules, then sort the sparse coefficients of each class from large to small. For a training data matrix with c subjects, each class having a training set of N training samples, the sparse coefficient corresponding to the p-th module of the test picture is:

x_p ∈ R^{cN×1},  1 ≤ p ≤ 12

v_pj = ||δ_j(x_p)||_1

where v_pj denotes the sum of the absolute values of the sparse coefficients of the p-th module of the test picture that belong to the j-th subject, δ_j(x_p) denotes the characteristic function selecting from x_p the entries associated with the j-th subject, n is the total number of training samples, and R^{cN×1} denotes a column vector of dimension cN, 1 ≤ p ≤ 12;

Therefore:

v_p = [v_p1, v_p2, ..., v_pc]^T

The sparse coefficients of all the modules are then pooled into a single score vector f ∈ R^{c×1}, where R^{c×1} denotes a column vector of dimension c; by finding the coordinate of the maximum value in f, the subject to which the test picture belongs is determined, thereby realizing correct classification and recognition.
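The step-12 scoring can be sketched as follows. The per-subject score v_pj and the argmax decision follow the text above; the pooling of module scores into f is taken here to be summation, which is an assumption, since the source only says the scores are combined into f ∈ R^c:

```python
import numpy as np

def classify_by_pooling(xs, c, N):
    """Hedged sketch of Step 12. xs: per-module sparse coefficient vectors,
    each of length c*N, with subject j's samples in the j-th contiguous block.
    v_pj = ||delta_j(x_p)||_1; modules are pooled by summation (assumed rule)."""
    f = np.zeros(c)
    for x in xs:
        v_p = np.abs(x).reshape(c, N).sum(axis=1)   # v_pj for j = 1..c
        f += v_p                                    # pool across modules
    return int(np.argmax(f))                        # subject with max score

# 12 modules, 3 subjects, 5 samples each; subject index 1's atoms dominate
rng = np.random.default_rng(3)
xs = []
for _ in range(12):
    x = 0.05 * rng.standard_normal(15)
    x[5:10] += 0.5              # energy on the block of subject index 1
    xs.append(x)
label = classify_by_pooling(xs, c=3, N=5)
print(label)  # -> 1
```

Summation keeps every module's vote while letting consistently large per-subject coefficients dominate, which matches the stated goal of exploiting correlation between modules.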
As a further optimization scheme of the present invention, the detailed process of step 9 is as follows: let A = [a_1, a_2, ..., a_t, ..., a_n] be the low-rank matrix after low-rank processing and illumination normalization, where a_t denotes the feature vector of the t-th training picture, 1 ≤ t ≤ n; each training picture is divided into 12 overlapping blocks, and A is correspondingly divided into A_1, A_2, ..., A_12, where A_p denotes the data matrix formed by the p-th block of all the training pictures, 1 ≤ p ≤ 12.
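The overlapping 12-block partition can be sketched as follows. The source fixes only the block count; the 4x3 grid and 50% overlap used here are illustrative assumptions:

```python
import numpy as np

def overlapping_blocks(img, grid=(4, 3), overlap=0.5):
    """Hedged sketch of the 12-block overlapping partition (assumed 4x3 grid,
    assumed 50% overlap). Returns one flattened vector per block."""
    H, W = img.shape
    gr, gc = grid
    bh = int(np.ceil(H / (gr - (gr - 1) * overlap)))   # block height
    bw = int(np.ceil(W / (gc - (gc - 1) * overlap)))   # block width
    sh = int(bh * (1 - overlap)); sw = int(bw * (1 - overlap))  # strides
    blocks = []
    for r in range(gr):
        for c in range(gc):
            r0 = min(r * sh, H - bh)    # clamp last blocks inside the image
            c0 = min(c * sw, W - bw)
            blocks.append(img[r0:r0 + bh, c0:c0 + bw].ravel())
    return blocks

img = np.arange(40 * 30, dtype=float).reshape(40, 30)
blocks = overlapping_blocks(img)
print(len(blocks))  # -> 12
```

Stacking the p-th flattened block of every training picture column-by-column yields exactly the per-block data matrix A_p described above.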
As a further optimization scheme of the present invention, the detailed process of step 4 is as follows: according to the criterion of LDA, the mean vectors are defined as follows:

1. the mean vector of each class of samples:  m_i = (1/n_i) Σ_{a∈A_i} a   (3)
2. the overall mean vector of the samples:  m = (1/n) Σ_{i=1}^{c} n_i m_i   (4)
3. the within-class scatter matrix of the samples:  S_w = Σ_{i=1}^{c} Σ_{a∈A_i} (a − m_i)(a − m_i)^T   (5)
4. the between-class scatter matrix of the samples:  S_b = Σ_{i=1}^{c} n_i (m_i − m)(m_i − m)^T   (6)

In formulas (3), (4), (5), (6), n is the total number of training samples, n = n_1 + n_2 + ... + n_c, and a denotes a column vector in A_i.

According to the principle of LDA and the characteristics of the within-class and between-class scatter matrices of the samples, the reference term is defined as:

Ψ(A_1, A_2, ..., A_c) = tr(S_w) − tr(S_b)   (7)

where tr(·) denotes the trace of a matrix, S_w is the within-class scatter matrix of the samples, and S_b is the between-class scatter matrix of the samples. Substituting formula (7) into formula (2) gives:

min_{A_i,E_i} Σ_{i=1}^{c} ( ||A_i||_* + λ1||E_i||_1 ) + λ2( tr(S_w) − tr(S_b) )  subject to  D_i = A_i + E_i   (8)

The objective function is further converted to:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2( Σ_{j=1}^{c} ||A_j − M_j||_F^2 − Σ_{j=1}^{c} n_j ||m_j − m||_2^2 )  subject to  D_i = A_i + E_i   (9)

where M_j = [m_j, m_j, ..., m_j] ∈ R^{d×n_j} denotes the mean-vector matrix corresponding to the j-th subject, and n_i, n_j denote the numbers of samples of the i-th and j-th subjects, respectively.

Since, when the low-rank decomposition of the i-th subject A_i is solved, the other subjects A_q are fixed (i ≠ q), the terms of formula (9) that do not involve A_i form a constant; denote this constant by ζ, and let d be the number of rows after a face image is converted into a column vector. Formula (9) can therefore be turned into:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2||A_i − M_i||_F^2 + λ2 ζ  subject to  D_i = A_i + E_i

Since λ2 ζ is a constant, the objective function is finally arranged as:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2||A_i − M_i||_F^2  subject to  D_i = A_i + E_i   (10)
Compared with the prior art, the present invention, by adopting the above technical scheme, achieves the following technical effects: the low-rank matrix decomposition effectively removes disguise and occlusion from facial images; the introduction of the reference term greatly increases the incoherence between classes in the low-rank matrices, which is more conducive to the classification and recognition of test images; the introduction of the DCT algorithm realizes image normalization and effectively solves the problem of uneven illumination in facial images; at the classification stage, the idea of clustering is exploited, which effectively improves the recognition speed. Extensive experiments on standard face databases show that, compared with existing face recognition algorithms, both the recognition accuracy and the computational efficiency of the algorithm herein are consistently improved.
Embodiment
The technical scheme of the present invention is described in further detail below with reference to the accompanying drawings:
First, select the database to be tested, for example the AR face database. The AR database contains 126 subjects and 4000 face pictures in total. In the experiment, we select 50 subjects from the male pictures; from each subject, 20 pictures are randomly selected as training pictures to form the training matrix, and the other 6 pictures form the test matrix as test pictures.
Low-rank matrix decomposition is performed on the training matrix, applying the new low-rank algorithm proposed in the present invention that improves the incoherence between classes in the matrix.
The final objective function of the algorithm is expressed as:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2||A_i − M_i||_F^2  subject to  D_i = A_i + E_i

For the above formula, the augmented Lagrange multiplier method (ALM) is adopted, and the A_i and E_i corresponding to each human subject are solved by iteration, as shown in the figures: Fig. 2(a) is the training matrix A with occlusion removed, which has a stronger representative ability than the initial training matrix D shown in Fig. 2(b) and is more conducive to face recognition; Fig. 2(c) is a schematic diagram of the error image in the low-rank matrix recovery. The low-rank matrix A is further processed: the DCT transform is applied to each training image of A for illumination normalization, with the effect illustrated in Fig. 1.
The training matrix A is partitioned into overlapping blocks. Given a data set A = [a_1, a_2, ..., a_n] formed by the n training pictures of c subjects, where a_t denotes the feature vector of the t-th training picture, each training picture is divided into 12 overlapping blocks, as shown in Fig. 4. Correspondingly, A is also divided into A_1, A_2, ..., A_12; the l-th column of A_p represents the feature vector of the p-th module of the l-th training picture.
By the same partition manner, the test picture is correspondingly divided into 12 modules y_1, y_2, ..., y_12.
At the classification stage, as shown in Fig. 5, for a training set with n subjects, each subject having N training samples, the sparse coefficient corresponding to the p-th module of the test picture is x_p ∈ R^{nN×1} (1 ≤ p ≤ 12), where v_pj denotes the sum of the absolute values of the sparse coefficients of the p-th module of the test picture that belong to the j-th subject, that is:
v_pj = ||δ_j(x_p)||_1

where δ_j(x_p) denotes the characteristic function selecting from x_p the entries associated with the j-th class; therefore v_p = [v_p1, v_p2, ..., v_pn]^T. The sparse coefficients of all the modules are pooled into a single score vector f ∈ R^{n×1}; by finding the coordinate of the maximum value in f (that is, by sorting the elements of the vector f), the class to which the test picture belongs can be determined, thereby realizing correct classification and recognition.
Fig. 3 summarizes the whole flow of the present invention; the concrete steps are as follows:
1. Building the training and test matrices: first select the face database to be tested; for each subject in the database, randomly select part of its images as training images and the rest as test images; combine the training images and the test images of all subjects to form the initial training data matrix D and the test matrix, respectively.
2. Low-rank matrix decomposition: low-rank matrix decomposition resolves the training data matrix D into A + E, where A represents the low-rank matrix and E represents the sparse error after decomposition. For a given input training data matrix D, low-rank matrix decomposition reaches the best low-rank approximation of D by minimizing the rank of the matrix A while reducing ||E||_0. However, this problem is NP-hard; by solving the following formula, the traditional low-rank matrix recovery becomes easy to handle:

min_{A,E} ||A||_* + λ||E||_1  subject to  D = A + E   (1)

In formula (1), the nuclear norm ||A||_* is an approximation of the rank of the matrix A, and the zero norm ||E||_0 is substituted by the l1 norm ||E||_1.
3. Introduction of the reference term: when low-rank matrix decomposition is applied to a face recognition system with N subjects, we can collect the corresponding training data matrix D = [D_1, D_2, ..., D_N], where D_i represents the data matrix of subject i and includes occlusion, disguise, etc. of the facial images. When low-rank matrix decomposition is carried out, the input data matrix D = [D_1, D_2, ..., D_N] resolves into the low-rank matrix A = [A_1, A_2, ..., A_N] and the error matrix E = [E_1, E_2, ..., E_N]. Although the low-rank matrix A has a stronger representative ability than the training data matrix D, the face pictures of different subjects share some common features, such as the positions of the eyes and the nose, which may cause the low-rank matrix A to contain insufficient discriminative information. The present invention therefore improves the incoherence between the low-rank matrices, keeping the different low-rank matrices as independent as possible. Based on formula (1) of the low-rank matrix decomposition, the new objective function is expressed as follows:

min_{A_i,E_i} Σ_{i=1}^{c} ( ||A_i||_* + λ1||E_i||_1 ) + λ2 Ψ(A_1, A_2, ..., A_c)  subject to  D_i = A_i + E_i   (2)

In formula (2), i = 1, 2, ..., c, where c is the number of subjects in the training data matrix; D_i is the data matrix of the i-th training samples, A_i is the corresponding low-rank matrix, and E_i is the sparse error matrix; Ψ(A_1, A_2, ..., A_c) is the reference term for improving the discriminative ability of the low-rank matrices; the parameter λ1 is a positive weight coefficient, and the parameter λ2 is a constant with λ2 ≥ 0.
4. Simplification of the objective function: the design of the reference term Ψ(A_1, A_2, ..., A_c) is not only conducive to dictionary learning, but can also improve the discriminative ability of the dictionary as much as possible. The aim of linear discriminant analysis (LDA) is to extract from the high-dimensional feature space the low-dimensional features with the greatest discriminating power; such features help samples of the same class cluster together while keeping samples of different classes as far apart as possible. According to the criterion of LDA, some mean vectors are first defined as follows:

The mean vector of each class of samples:  m_i = (1/n_i) Σ_{a∈A_i} a   (3)
The overall mean vector of the samples:  m = (1/n) Σ_{i=1}^{c} n_i m_i   (4)
The within-class scatter matrix of the samples:  S_w = Σ_{i=1}^{c} Σ_{a∈A_i} (a − m_i)(a − m_i)^T   (5)
The between-class scatter matrix of the samples:  S_b = Σ_{i=1}^{c} n_i (m_i − m)(m_i − m)^T   (6)

In formulas (3), (4), (5), (6), n_i is the number of samples of the i-th subject, n = n_1 + n_2 + ... + n_c is the total number of training samples, A_i is the training data matrix of the i-th subject, and a denotes a column vector in A_i.

According to the principle of LDA and the characteristics of the within-class and between-class scatter matrices of the samples, the reference term is defined as:

Ψ(A_1, A_2, ..., A_c) = tr(S_w) − tr(S_b)   (7)

where tr(·) denotes the trace of a matrix. Substituting formula (7) into formula (2) gives:

min_{A_i,E_i} Σ_{i=1}^{c} ( ||A_i||_* + λ1||E_i||_1 ) + λ2( tr(S_w) − tr(S_b) )  subject to  D_i = A_i + E_i   (8)

Since the algorithm solves the low-rank matrix decomposition of the raw data matrix of each subject separately, in order to avoid solving formula (8) directly, the objective function can be turned into:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2( Σ_{j=1}^{c} ||A_j − M_j||_F^2 − Σ_{j=1}^{c} n_j ||m_j − m||_2^2 )  subject to  D_i = A_i + E_i   (9)

Since, when the low-rank decomposition of the i-th subject A_i is solved, the other subjects A_q (i ≠ q) are fixed, the terms of formula (9) that do not involve A_i form a constant; denote this constant by ζ, and let d be the number of rows after a face image is converted into a column vector, so that M_j = [m_j, m_j, ..., m_j] ∈ R^{d×n_j}. Formula (9) can therefore be turned into:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2||A_i − M_i||_F^2 + λ2 ζ  subject to  D_i = A_i + E_i

Since λ2 ζ is a constant, the objective function is finally arranged as:

min_{A_i,E_i} ||A_i||_* + λ1||E_i||_1 + λ2||A_i − M_i||_F^2  subject to  D_i = A_i + E_i   (10)
5. Algorithm optimization based on ALM: the augmented Lagrange multiplier method (ALM) is widely used as a standard algorithm for solving low-rank matrix recovery problems. When solving the optimization problem of minimizing f(X) subject to the constraint condition h(X) = 0, the objective function of ALM is defined as:

L(X, Y, μ) = f(X) + <Y, h(X)> + (μ/2)||h(X)||_F^2   (11)

In the formula, Y represents the Lagrange multiplier and μ represents a positive parameter. According to formula (11), let

f(X) = ||A_i||_* + λ1||E_i||_1 + λ2||A_i − M_i||_F^2,   h(X) = D_i − A_i − E_i

Then formula (10), solved with the ALM algorithm, is expressed as follows (letting A_i' = A_i − M_i):

L(A_i, E_i, Y_i, μ) = ||A_i||_* + λ1||E_i||_1 + λ2||A_i'||_F^2 + <Y_i, D_i − A_i − E_i> + (μ/2)||D_i − A_i − E_i||_F^2   (12)
6. Renewal of the low-rank matrix A: for formula (12), A is solved by iteration:

(U, S, V) = SVD( ε(2λ2 M_i + μ_k(D_i − E_i^k) + Y_i^k) ),   A_i^{k+1} = U S_ε[S] V^T

In the formula, ε = (2λ2 + μ_k)^{-1} and k represents the number of iterations; the singular value decomposition is used, where U and V are unitary matrices, S is a diagonal matrix, and S_ε[·] soft-thresholds the singular values by ε.
7. Renewal of the error matrix E:

E_i^{k+1} = S_{λ1/μ_k}[ D_i − A_i^{k+1} + Y_i^k/μ_k ]

In the formula, S_τ[x] = sign(x)·max(|x| − τ, 0) is the entrywise soft-thresholding operator. When A_i and E_i have both been iterated out, the low-rank matrix decomposition is essentially complete.
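Items 5-7 can be combined into one alternating loop per subject. The sketch below uses the ε = (2λ2 + μ_k)^{-1} threshold stated above; the parameter values and stopping rule are illustrative assumptions, not values from the source:

```python
import numpy as np

def lowrank_decompose_subject(D, M, lam1=0.1, lam2=0.1, mu=1e-2, rho=1.5, n_iter=100):
    """ALM iterations for one subject:
    min ||A||_* + lam1*||E||_1 + lam2*||A - M||_F^2  s.t.  D = A + E.
    M stacks copies of the subject's mean vector (illustrative parameters)."""
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        eps = 1.0 / (2 * lam2 + mu)                   # item-6 threshold
        W = eps * (2 * lam2 * M + mu * (D - E) + Y)
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        A = U @ np.diag(np.maximum(s - eps, 0.0)) @ Vt
        T = D - A + Y / mu                            # item-7 shrinkage
        E = np.sign(T) * np.maximum(np.abs(T) - lam1 / mu, 0.0)
        R = D - A - E
        Y = Y + mu * R                                # multiplier update
        mu *= rho                                     # grow penalty parameter
        if np.linalg.norm(R, 'fro') <= 1e-7 * np.linalg.norm(D, 'fro'):
            break
    return A, E

rng = np.random.default_rng(4)
mean_vec = rng.standard_normal(50)
D = np.tile(mean_vec[:, None], (1, 10)) + 0.1 * rng.standard_normal((50, 10))
D[rng.integers(0, 50, 15), rng.integers(0, 10, 15)] += 3.0   # sparse occlusion
M = np.tile(D.mean(axis=1, keepdims=True), (1, 10))
A, E = lowrank_decompose_subject(D, M)
print(np.linalg.norm(D - A - E) < 1e-3 * np.linalg.norm(D))  # -> True
```

The mean-anchored term 2λ2 M_i in the A-update is what distinguishes this per-subject decomposition from plain robust PCA: it pulls each subject's low-rank matrix toward its class mean, as the derivation of formula (10) intends.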
8. DCT illumination normalization: applying the discrete cosine transform (DCT) to the matrix A obtained from the low-rank matrix decomposition can both effectively handle uneven illumination in facial images and achieve the purpose of illumination normalization.
9. Overlapping block partition of facial images: let A = [a_1, a_2, ..., a_t, ..., a_n] be the low-rank training matrix after low-rank processing and illumination normalization, where a_t denotes the feature vector of the t-th training picture; each training picture is divided into 12 overlapping blocks. Correspondingly, A is also divided into A_1, A_2, ..., A_12, where A_p (1 ≤ p ≤ 12) denotes the data matrix formed by the p-th block of all the training pictures; the overlapping block partition of the images is shown in Fig. 4.
10. Test image partition: following the partition manner of the training images, the test image is correspondingly divided into 12 modules y_1, y_2, ..., y_12.
11. Solving the sparse coefficients of the modules: John Wright proposed a face recognition algorithm based on sparse representation (SRC). SRC regards each test picture as a linear combination of the training pictures, and the sparse coding problem therein can be solved by l1 minimization. According to the theory related to SRC, for any test module y_p we have:

y_p = A_p x_p   (13)

Since this is an underdetermined system, formula (13) has many solutions. In order to obtain a unique, stable solution, a constraint condition needs to be added. Because l2-norm minimization is fairly simple and convenient, it has become the most common constraint condition; however, the sparse coefficients it produces are rather dense, which is of little benefit to the recognition of test images. In order to obtain a more desirable sparse solution, we use l1-norm minimization. The optimization problem is:

x_p = argmin_x ||x||_1  subject to  y_p = A_p x   (14)

In the formula, x_p represents the sparse coefficient corresponding to the p-th module.
12. Pooling classification algorithm: in face recognition based on sparse representation (SRC), classification is generally realized by comparing the sizes of the residuals r_p to judge which training sample the test picture belongs to:

r_p(j) = ||y_p − A_p δ_j(x_p)||_2

In order to avoid processing each module separately and ignoring the correlation between modules, the present invention makes full use of the sparse coefficients of the different modules, borrowing the idea of alignment pooling: first the sparse coefficients of the same class from the different modules are pooled together, and then the sparse coefficients of each class are sorted from large to small (alignment); in this way the relevant information between modules is utilized, while the importance of the higher-weight modules in the classification process is highlighted. For a training set with c subjects, each subject having N training samples, the sparse coefficient corresponding to the p-th module of the test picture is x_p ∈ R^{cN×1} (1 ≤ p ≤ 12), where v_pj denotes the sum of the absolute values of the sparse coefficients of the p-th module of the test picture that belong to the j-th subject, that is:

v_pj = ||δ_j(x_p)||_1

where δ_j(x_p) denotes the characteristic function selecting from x_p the entries associated with the j-th class, so v_p = [v_p1, v_p2, ..., v_pc]^T. Using the idea of pooling, the sparse coefficients of the modules are combined into a single score vector f ∈ R^{c×1}; by finding the coordinate of the maximum value in f (that is, by performing alignment on the elements of the vector f), the class to which the test picture belongs can be determined, thereby realizing correct classification and recognition.