CN103632138A - Low-rank partitioning sparse representation human face identifying method - Google Patents

Low-rank partitioning sparse representation human face identifying method

Info

Publication number
CN103632138A
Authority
CN
China
Prior art keywords
matrix
lambda
low
rank
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310586448.1A
Other languages
Chinese (zh)
Other versions
CN103632138B (en)
Inventor
胡昭华
赵孝磊
徐玉伟
何军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mingde Xinmin Sports Culture Co.,Ltd.
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201310586448.1A priority Critical patent/CN103632138B/en
Publication of CN103632138A publication Critical patent/CN103632138A/en
Application granted granted Critical
Publication of CN103632138B publication Critical patent/CN103632138B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a low-rank partitioning sparse representation face recognition method. The method adopts low-rank matrix decomposition, introduces a reference term, and applies a DCT (discrete cosine transform) algorithm to normalize images, effectively handling uneven illumination in face images. At the classification stage, a clustering idea is used, which effectively improves recognition speed. The algorithm has been tested repeatedly on standard face databases, and the results show that, compared with existing face recognition methods, both its recognition accuracy and its computational efficiency are consistently improved. The precision and stability of face recognition are improved under complex conditions such as occlusion, disguise, and illumination variation in the face image.

Description

A face recognition method based on low-rank block sparse representation
Technical field
The invention discloses a face recognition method based on low-rank block sparse representation, and relates to the fields of image processing and pattern recognition.
Background technology
Face recognition is a popular research topic in computer vision. It combines feature analysis algorithms, computer image processing techniques, and principles of biostatistics: image processing is used to extract facial feature points from video, and biostatistical analysis is used to build a mathematical model, so the field has great potential for further development. A robust face recognition algorithm must effectively handle many challenges, such as occlusion, disguise, illumination variation, and image misalignment of the face.
Sparse coding is an emerging approach to face recognition. It treats the face image to be recognized as a linear combination of all training images under a sparsity constraint. Sparse representation coding uses the L1 norm and the L0 norm to reach satisfactory recognition accuracy; a similar algorithm is linear regression classification. These methods can perform well under controlled conditions, but perform poorly when the test or training images contain contiguous occlusion or disguise. Modular (block-based) approaches have therefore been applied to both sparse representation coding and linear regression classification. However, modular sparse representation coding and modular linear regression classification share a common shortcoming: each block is processed independently, so the correlation between blocks is lost.
In the face recognition field, illumination variation remains a challenging problem. Many methods exist to address it, for example histogram equalization, gamma correction, and log transformation, which are widely used for illumination normalization. However, these global processing techniques cannot easily handle non-uniform illumination changes within a face image. Other methods attempt to extract illumination-invariant facial features, typical examples being edge maps, gray-level image expansion, and Gabor filtering, but experiments show that their performance degrades badly when the illumination direction on the face changes. Illumination variation mainly arises because the three-dimensional face produces different shading under different illumination directions. Recently, some researchers have attempted to build a 3D face model to solve the illumination compensation problem. However, this model-based approach has a drawback: a large number of images under different illumination conditions, together with their three-dimensional information, must be acquired at the training stage, which greatly reduces recognition speed.
Summary of the invention
The technical problem to be solved by the invention is that existing face recognition methods cannot simultaneously and effectively cope with occlusion, disguise, and illumination variation in face images. To address this defect, a face recognition method based on low-rank block sparse representation is provided, so as to improve the precision and robustness of face recognition under complex conditions such as occlusion, disguise, and illumination variation in the face image.
To solve the above technical problem, the present invention adopts the following technical solution:
A face recognition method based on low-rank block sparse representation, with the following concrete steps:
Step 1: for each subject in the face database, randomly select part of its images as training images and the remainder as test images; assemble the training images of all subjects into an initial training data matrix and the test images into a test matrix;
Step 2: decompose the training data matrix D into A + E, where A is the low-rank decomposition matrix and E is the sparse error after decomposition. By minimizing the rank of the low-rank decomposition matrix A while reducing the value of the zero norm $\|E\|_0$, the best low-rank approximation of the training data matrix D is obtained. The low-rank matrix decomposition is formulated as:
$$\min_{A,E}\ \|A\|_* + \lambda\|E\|_1 \quad \text{s.t.}\ D = A + E \tag{1}$$
In formula (1), the nuclear norm $\|A\|_*$ is a surrogate for the rank of the low-rank decomposition matrix A, the one-norm $\|E\|_1$ replaces the zero norm $\|E\|_0$, and λ is a parameter;
Step 3: introduce a reference term and, based on the low-rank decomposition formula (1), establish the objective function:
$$\min_{A,E}\ \sum_{i=1}^{c}\left\{\|A_i\|_* + \lambda_1\|E_i\|_1\right\} + \lambda_2\,\Psi(A_1,A_2,\ldots,A_c) \quad \text{s.t.}\ D_i = A_i + E_i \tag{2}$$
In formula (2), $i = 1, 2, \ldots, c$, where c is the number of classes in the training data matrix; $D_i$ is the i-th training data matrix, $A_i$ the i-th low-rank decomposition matrix, and $E_i$ the i-th sparse error matrix; $\Psi(A_1, A_2, \ldots, A_c)$ is the reference term that improves the discriminative ability of the low-rank decomposition matrices; the parameter $\lambda_1$ is a positive weight coefficient, and the parameter $\lambda_2$ is a constant with $\lambda_2 \ge 0$;
Step 4: simplify the objective function; the final form of the objective function is:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 \quad \text{s.t.}\ D_i = A_i + E_i \tag{10}$$
where $M_i = [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}$, d is the number of rows of a face image after it is reshaped into a column vector, $m_i$ is the mean vector of the i-th subject, $n_i$ is the number of samples of the i-th subject, and $\|\cdot\|_F$ denotes the Frobenius norm;
Step 5: apply the augmented Lagrange multiplier method to formula (10):
$$L(A_i, E_i, Y_i, \mu, \lambda_2) = \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i'\|_F^2 + \langle Y_i,\ D_i - A_i - E_i\rangle + \frac{\mu}{2}\|D_i - A_i - E_i\|_F^2 \tag{12}$$
where $A_i' = A_i - M_i$, $Y_i$ is the Lagrange multiplier corresponding to the i-th subject, the aim is to find the extremum of the function by iteration, and μ is a positive parameter;
Step 6: for formula (12), update the low-rank decomposition matrix A by iteratively solving for A:
$$A_i^{k+1} = \arg\min_{A_i} L(A_i, E_i^k, Y_i^k, \mu_k, \lambda_2) = \arg\min_{A_i}\ \|A_i\|_* + \lambda_2\|A_i'\|_F^2 + \langle Y_i^k, D_i - A_i - E_i^k\rangle + \frac{\mu_k}{2}\|D_i - A_i - E_i^k\|_F^2 = \arg\min_{A_i}\ \varepsilon\|A_i\|_* + \frac{1}{2}\|X_a - A_i\|_F^2$$
$$A_i^{k+1} = U S_\varepsilon V^T,\quad \text{where}\ (U, S, V^T) = \mathrm{SVD}(X_a)$$
In the formula, $\varepsilon = (2\lambda_2 + \mu_k)^{-1}$, k is the iteration count, $E_i^k$ is the value of the i-th sparse error matrix after the k-th iteration, $Y_i^k$ is the value of the Lagrange multiplier of the i-th subject after the k-th iteration, and $\mu_k$ is the value of the positive parameter after the k-th iteration; the iterative process is controlled by adjusting $Y_i^k$; SVD denotes the singular value decomposition, where U and V are unitary matrices and S is a diagonal matrix;
Step 7: update the error matrix E:
$$E_i^{k+1} = \arg\min_{E_i} L(A_i^{k+1}, E_i, Y_i^k, \mu_k, \lambda_2) = \arg\min_{E_i}\ \|E_i\|_1 + \langle Y_i^k, A_i^{k+1} + E_i - D_i\rangle + \frac{\mu_k}{2}\|A_i^{k+1} + E_i - D_i\|_F^2 = \arg\min_{E_i}\ \varepsilon'\|E_i\|_1 + \frac{1}{2}\|X_e - E_i\|_F^2$$
where $\varepsilon' = \frac{1}{\mu_k}$ and $X_e = D_i - A_i^{k+1} + \frac{1}{\mu_k}Y_i^k$;
Step 8: apply the discrete cosine transform to the low-rank decomposition matrix A to perform illumination normalization;
Step 9: partition the training images into overlapping blocks;
Step 10: referring to the processing of the training images in steps 2 to 9, partition the test images into blocks in the same way;
Step 11: using the face recognition algorithm based on sparse representation, combined with $l_1$-norm minimization, solve for the sparse coefficients of each block:
$$x_p = \arg\min_x \left\{\|A_p x - y_p\|_2^2 + \lambda\|x\|_1\right\},\quad 1 \le p \le 12$$
where x is the sparse coefficient vector, $x_p$ is the sparse coefficient vector of the p-th block, and $y_p$ is the column vector formed by the p-th block of the test image;
Step 12: using the sparse coefficients of the different blocks, combine the sparse coefficients that belong to the same class across blocks, and then sort the sparse coefficients of each class in descending order. For a training data matrix with c subjects, where each class has a training set of N samples, the sparse coefficient vector of the p-th block of the test image is:
$$x_p \in R^{cN\times 1},\quad 1 \le p \le 12$$
$$v_{pj} = \|\delta_j(x_p)\|_1$$
where $v_{pj}$ is the sum of absolute values of the sparse coefficients of the p-th block of the test image that belong to the j-th subject, $\delta_j(x_p)$ is the characteristic function that selects from $x_p$ the entries associated with the j-th subject, n is the total number of training samples, and $R^{cN\times 1}$ denotes a column vector of dimension cN, with $1 \le p \le 12$;
Therefore:
$$v_p = [v_{p1}, v_{p2}, \ldots, v_{pc}]^T$$
The sparse coefficients of all blocks are then pooled:
$$f = \frac{1}{12}\sum_{p=1}^{12} v_p$$
with $f \in R^{c\times 1}$, where $R^{c\times 1}$ is a column vector of dimension c; the coordinate of the maximum value in f determines which subject the test image belongs to, thereby achieving correct classification and recognition.
As a further preferred scheme of the present invention, the detailed process of step 9 is: given the low-rank decomposition matrix $A = [a_1, a_2, \ldots, a_t, \ldots, a_n]$ obtained after low-rank processing and illumination normalization, where $a_t$ is the feature vector of the t-th training image, $1 \le t \le n$, each training image is divided into 12 overlapping blocks, and A is correspondingly divided into sub-matrices $A_1, A_2, \ldots, A_{12}$, where $A_p$ is the data matrix formed by the p-th block of all training images, $1 \le p \le 12$.
As a further preferred scheme of the present invention, the detailed process of step 4 is as follows. According to the criterion of LDA, the mean vectors are defined as follows:
1. the mean vector of each class of samples:
$$m_i = \frac{1}{n_i}\sum_{a\in A_i} a \tag{3}$$
2. the overall mean vector of the samples:
$$m = \frac{1}{n}\sum_{i=1}^{c} n_i \cdot m_i \tag{4}$$
3. the within-class scatter matrix of the samples:
$$S_w = \sum_{i=1}^{c}\sum_{a\in A_i}(a - m_i)(a - m_i)^T \tag{5}$$
4. the between-class scatter matrix of the samples:
$$S_b = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T \tag{6}$$
In formulas (3), (4), (5), and (6), n is the total number of training samples, $n = n_1 + n_2 + \cdots + n_c$, and a denotes a column vector of $A_i$.
According to the principle of LDA and the properties of the between-class and within-class scatter matrices of the samples, the reference term is defined as:
$$\Psi(A_1, A_2, \ldots, A_c) = \mathrm{tr}(S_w) - \mathrm{tr}(S_b) \tag{7}$$
where $\mathrm{tr}(\cdot)$ denotes the matrix trace, $S_w$ is the within-class scatter matrix of the samples, and $S_b$ is the between-class scatter matrix of the samples. Substituting formula (7) into formula (2) gives:
$$\min_{A,E}\ \sum_{i=1}^{c}\left\{\|A_i\|_* + \lambda_1\|E_i\|_1\right\} + \lambda_2\left(\mathrm{tr}(S_w) - \mathrm{tr}(S_b)\right) \quad \text{s.t.}\ D_i = A_i + E_i \tag{8}$$
The objective function is further converted to:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\,\psi(A_i) \quad \text{s.t.}\ D_i = A_i + E_i \tag{9}$$
where:
$$\psi(A_i) = \left\|A_i - [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}\right\|_F^2 - \sum_{j=1}^{c}\left\|[m_j\ m_j\ \cdots\ m_j]_{d\times n_j} - [m\ m\ \cdots\ m]_{d\times n_j}\right\|_F^2$$
$m_j$ is the mean vector of the j-th subject, and $n_i$, $n_j$ are the numbers of samples of the i-th and j-th subjects respectively.
Since, while solving the low-rank decomposition of the i-th subject $A_i$, the other subjects $A_q$ ($i \ne q$) are fixed,
$$\zeta = \sum_{j=1}^{c}\left\|[m_j\ m_j\ \cdots\ m_j]_{d\times n_j} - [m\ m\ \cdots\ m]_{d\times n_j}\right\|_F^2$$
is a constant. Letting $M_i = [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}$, formula (9) can be turned into:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 - \lambda_2\zeta \quad \text{s.t.}\ D_i = A_i + E_i$$
Since $\lambda_2\zeta$ is a constant, the objective function is finally arranged as:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 \quad \text{s.t.}\ D_i = A_i + E_i \tag{10}$$
Compared with the prior art, the above technical scheme of the present invention has the following technical effects. The low-rank matrix decomposition effectively removes disguise and occlusion from face images. The introduction of the reference term greatly increases the incoherence between classes in the low-rank matrices, which is more conducive to classifying and recognizing test images. The introduction of the DCT algorithm realizes image normalization and effectively solves the problem of uneven illumination in face images. At the classification stage, a clustering idea is used, which effectively improves recognition speed. The algorithm was tested many times on standard face databases, and the experimental results show that, compared with existing face recognition algorithms, both the recognition accuracy and the computational efficiency of the proposed algorithm are consistently improved.
Brief description of the drawings
Fig. 1 is a schematic diagram of face illumination normalization based on the DCT transform.
Fig. 2(a) is a schematic diagram of the original image in the low-rank matrix recovery.
Fig. 2(b) is a schematic diagram of the low-rank image in the low-rank matrix recovery.
Fig. 2(c) is a schematic diagram of the error image in the low-rank matrix recovery.
Fig. 3 is the flow chart of the algorithm of the present invention.
Fig. 4 is a schematic diagram of dividing a face image into overlapping blocks in the present invention.
Fig. 5 is a schematic diagram of face image sparse coding and alignment-pooling classification in the present invention.
Embodiment
The technical scheme of the present invention is described in further detail below with reference to the accompanying drawings:
First select the database to be tested, for example the AR face database. The AR database contains 126 subjects and a total of 4000 face images. In the experiments, we select 50 subjects from the male images; from each subject, 20 images are randomly selected as training images to form the training matrix, and the other 6 images are used as test images to form the test matrix.
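Purely as an illustration of this step (not part of the patent text), the sketch below builds an initial training data matrix D and a test matrix by randomly splitting each subject's vectorized images; the input format, function name, and fixed random seed are assumptions.

```python
# Hypothetical helper: split each subject's images into training/test columns.
import numpy as np

def build_matrices(images_per_subject, n_train, seed=0):
    """images_per_subject: list of arrays, each (d, n_i), one vectorized image per column."""
    rng = np.random.default_rng(seed)
    train_cols, test_cols, train_labels, test_labels = [], [], [], []
    for label, imgs in enumerate(images_per_subject):
        idx = rng.permutation(imgs.shape[1])
        train_cols.append(imgs[:, idx[:n_train]])          # training images of this subject
        test_cols.append(imgs[:, idx[n_train:]])           # remaining images become test images
        train_labels += [label] * n_train
        test_labels += [label] * (imgs.shape[1] - n_train)
    D = np.hstack(train_cols)                               # initial training data matrix D
    T = np.hstack(test_cols)                                # test matrix
    return D, T, np.array(train_labels), np.array(test_labels)
```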
The training matrix is subjected to low-rank matrix decomposition, applying the new low-rank algorithm proposed in the present invention that increases the incoherence between classes in the matrix.
The final objective function of the algorithm is expressed as:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 \quad \text{s.t.}\ D_i = A_i + E_i$$
The augmented Lagrange multiplier method (ALM) is applied to the above formula, and the $A_i$ and $E_i$ corresponding to each subject are solved iteratively, as shown in Fig. 2: the low-rank training matrix A with occlusion removed (the low-rank image in Fig. 2(b)) has more representative power than the initial training matrix D (the original image in Fig. 2(a)) and is more conducive to face recognition, while Fig. 2(c) shows the error image in the low-rank matrix recovery. The low-rank matrix A is then further processed: the DCT transform is used to perform illumination normalization on each training image of A; the effect is illustrated in Fig. 1.
The training matrix A is partitioned into overlapping blocks. Given the data set $A = [a_1, a_2, \ldots, a_n]$ formed by the n training images of c subjects, where $a_t$ is the feature vector of the t-th training image, each training image is divided into 12 overlapping blocks, as shown in Fig. 4. Accordingly, A is divided into sub-matrices $A_1, A_2, \ldots, A_{12}$; it can be seen that the l-th column of $A_p$ is the feature vector of the p-th block of the l-th training image.
Following the same partitioning, the test image is correspondingly divided into 12 blocks $y_1, y_2, \ldots, y_{12}$.
In the classification part, as shown in Fig. 5, for a system with c subjects where each subject has a training set of N training samples, the sparse coefficient vector of the p-th block of the test image is $x_p \in R^{cN\times 1}$ ($1 \le p \le 12$), and $v_{pj}$ is the sum of absolute values of the sparse coefficients of the p-th block of the test image that belong to the j-th subject, that is:
$$v_{pj} = \|\delta_j(x_p)\|_1$$
where $\delta_j(x_p)$ is the characteristic function that selects from $x_p$ the entries associated with the j-th class; therefore $v_p = [v_{p1}, v_{p2}, \ldots, v_{pc}]^T$. The sparse coefficients of all blocks are pooled:
$$f = \frac{1}{12}\sum_{p=1}^{12} v_p$$
where $f \in R^{c\times 1}$; by finding the coordinate of the maximum value in f (equivalently, by sorting the elements of the vector f), it can be determined which class the test image belongs to, thereby achieving correct classification and recognition.
Fig. 3 summarizes the whole flow of the present invention; the concrete steps are as follows:
1. Build the training and test matrices: first select the face database to be tested; for each subject in the database, randomly select part of its images as training images and the remainder as test images, and assemble the training images of all subjects into the initial training data matrix D and the test images into the test matrix.
2. Low-rank matrix decomposition: low-rank matrix decomposition resolves the training data matrix D into A + E, where A is the low-rank matrix and E is the sparse error after decomposition. For a given input training data matrix D, low-rank matrix decomposition obtains the best low-rank approximation of D by minimizing the rank of matrix A while reducing the value of $\|E\|_0$. This problem is NP-hard, so traditional low-rank matrix recovery is made tractable by solving the following formula instead:
$$\min_{A,E}\ \|A\|_* + \lambda\|E\|_1 \quad \text{s.t.}\ D = A + E \tag{1}$$
In formula (1), the nuclear norm $\|A\|_*$ approximates the rank of matrix A, and the zero norm $\|E\|_0$ is replaced by the one-norm $\|E\|_1$.
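For reference, a minimal sketch of how formula (1) is commonly solved (classical robust PCA recovery via inexact ALM); the default λ = 1/√max(m, n), the μ schedule, and the stopping tolerance are customary choices, not values given in the patent.

```python
# Sketch of robust PCA: min ||A||_* + lam*||E||_1  s.t.  D = A + E, via inexact ALM.
import numpy as np

def svd_shrink(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt          # singular-value thresholding

def soft_thresh(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)       # entry-wise shrinkage

def rpca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))                    # common default weight
    mu = mu or 1.25 / np.linalg.norm(D, 2)                   # initial penalty parameter
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(max_iter):
        A = svd_shrink(D - E + Y / mu, 1.0 / mu)             # update low-rank part A
        E = soft_thresh(D - A + Y / mu, lam / mu)            # update sparse error E
        R = D - A - E
        Y += mu * R                                          # Lagrange multiplier update
        mu *= 1.5
        if np.linalg.norm(R) / max(np.linalg.norm(D), 1e-12) < tol:
            break
    return A, E
```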
3. Introduction of the reference term: when low-rank matrix decomposition is applied to a face recognition system with c subjects, the corresponding training data matrix $D = [D_1, D_2, \ldots, D_c]$ can be collected, where $D_i$ is the data matrix of subject i and may contain occlusion, disguise, and so on. When low-rank matrix decomposition is carried out, the input data matrix $D = [D_1, D_2, \ldots, D_c]$ is decomposed into the low-rank matrix $A = [A_1, A_2, \ldots, A_c]$ and the error matrix $E = [E_1, E_2, \ldots, E_c]$. Although the low-rank matrix A has a stronger representative ability than the training data matrix D, the face images of different subjects share some common features, such as the positions of the eyes and the nose, which may cause the low-rank matrix A to contain insufficient discriminative information. The present invention therefore improves the incoherence between the low-rank matrices, keeping different low-rank matrices as independent as possible. Based on the low-rank matrix decomposition formula (1), the new objective function is expressed as follows:
$$\min_{A,E}\ \sum_{i=1}^{c}\left\{\|A_i\|_* + \lambda_1\|E_i\|_1\right\} + \lambda_2\,\Psi(A_1,A_2,\ldots,A_c) \quad \text{s.t.}\ D_i = A_i + E_i \tag{2}$$
In formula (2), $i = 1, 2, \ldots, c$, where c is the number of subjects in the training data matrix; $D_i$ is the training data matrix of the i-th subject, $A_i$ the corresponding low-rank decomposition matrix, and $E_i$ the sparse error matrix; $\Psi(A_1, A_2, \ldots, A_c)$ is the reference term that improves the discriminative ability of the low-rank matrices; the parameter $\lambda_1$ is a positive weight coefficient, and the parameter $\lambda_2$ is a constant with $\lambda_2 \ge 0$.
4. Simplification of the objective function: the design of the reference term $\Psi(A_1, A_2, \ldots, A_c)$ is not only conducive to dictionary learning but also improves the discriminative power of the dictionary as much as possible. The aim of linear discriminant analysis (LDA) is to extract from the high-dimensional feature space the low-dimensional features with the most discriminating power; these features pull samples of the same class together and push samples of different classes as far apart as possible. According to the criterion of LDA, some mean vectors are first defined as follows:
The mean vector of each class of samples:
$$m_i = \frac{1}{n_i}\sum_{a\in A_i} a \tag{3}$$
The overall mean vector of the samples:
$$m = \frac{1}{n}\sum_{i=1}^{c} n_i \cdot m_i \tag{4}$$
The within-class scatter matrix of the samples:
$$S_w = \sum_{i=1}^{c}\sum_{a\in A_i}(a - m_i)(a - m_i)^T \tag{5}$$
The between-class scatter matrix of the samples:
$$S_b = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T \tag{6}$$
In formulas (3), (4), (5), and (6), $n_i$ is the number of samples of the i-th subject, n is the total number of training samples with $n = n_1 + n_2 + \cdots + n_c$, $A_i$ is the training data matrix of the i-th subject, and a denotes a column vector of $A_i$.
According to the principle of LDA and the properties of the between-class and within-class scatter matrices of the samples, the reference term is defined as:
$$\Psi(A_1, A_2, \ldots, A_c) = \mathrm{tr}(S_w) - \mathrm{tr}(S_b) \tag{7}$$
where $\mathrm{tr}(\cdot)$ denotes the matrix trace. Substituting formula (7) into formula (2) gives:
$$\min_{A,E}\ \sum_{i=1}^{c}\left\{\|A_i\|_* + \lambda_1\|E_i\|_1\right\} + \lambda_2\left(\mathrm{tr}(S_w) - \mathrm{tr}(S_b)\right) \quad \text{s.t.}\ D_i = A_i + E_i \tag{8}$$
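As an illustrative aside (not patent text), the quantities in formulas (3) through (7) can be computed directly from the per-subject matrices; the sketch below returns tr(S_w) − tr(S_b), the reference term of formula (7), using the fact that the trace of a scatter matrix equals a sum of squared Euclidean norms.

```python
# Sketch: per-class means, overall mean, and the scatter traces of formulas (3)-(7).
import numpy as np

def reference_term(A_list):
    """A_list: list of per-subject matrices A_i, each (d, n_i), one sample per column."""
    n = sum(Ai.shape[1] for Ai in A_list)                                      # total samples
    means = [Ai.mean(axis=1, keepdims=True) for Ai in A_list]                  # m_i, formula (3)
    m = sum(Ai.shape[1] * mi for Ai, mi in zip(A_list, means)) / n             # m,   formula (4)
    tr_Sw = sum(np.sum((Ai - mi) ** 2) for Ai, mi in zip(A_list, means))       # tr(S_w), formula (5)
    tr_Sb = sum(Ai.shape[1] * np.sum((mi - m) ** 2)                            # tr(S_b), formula (6)
                for Ai, mi in zip(A_list, means))
    return tr_Sw - tr_Sb                                                       # reference term (7)
```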
Because the algorithm solves the low-rank matrix decomposition separately for the raw data matrix of each subject, to avoid solving formula (8) directly, the objective function can be turned into:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\,\psi(A_i) \quad \text{s.t.}\ D_i = A_i + E_i \tag{9}$$
where
$$\psi(A_i) = \left\|A_i - [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}\right\|_F^2 - \sum_{j=1}^{c}\left\|[m_j\ m_j\ \cdots\ m_j]_{d\times n_j} - [m\ m\ \cdots\ m]_{d\times n_j}\right\|_F^2$$
Since, while solving the low-rank decomposition of the i-th subject $A_i$, the other subjects $A_q$ ($i \ne q$) are fixed,
$$\zeta = \sum_{j=1}^{c}\left\|[m_j\ m_j\ \cdots\ m_j]_{d\times n_j} - [m\ m\ \cdots\ m]_{d\times n_j}\right\|_F^2$$
is a constant. Letting $M_i = [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}$, where d is the number of rows of a face image after it is reshaped into a column vector, formula (9) can be turned into:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 - \lambda_2\zeta \quad \text{s.t.}\ D_i = A_i + E_i$$
Since $\lambda_2\zeta$ is a constant, the objective function is finally arranged as:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 \quad \text{s.t.}\ D_i = A_i + E_i \tag{10}$$
5. Algorithm optimization based on ALM: the augmented Lagrange multiplier method (ALM) is widely used as a standard algorithm for solving low-rank matrix recovery problems. For the optimization problem of minimizing f(X) subject to the constraint h(X) = 0, the ALM objective function is defined as:
$$L(X, Y, \mu) = f(X) + \langle Y, h(X)\rangle + \frac{\mu}{2}\|h(X)\|_F^2 \tag{11}$$
where Y is the Lagrange multiplier and μ is a positive parameter. Based on formula (11), let
$$f(X) = \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2$$
$$h(X) = D_i - A_i - E_i$$
With $A_i' = A_i - M_i$, formula (10) solved by the ALM algorithm is expressed as follows:
$$L(A_i, E_i, Y_i, \mu, \lambda_2) = \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i'\|_F^2 + \langle Y_i, D_i - A_i - E_i\rangle + \frac{\mu}{2}\|D_i - A_i - E_i\|_F^2 \tag{12}$$
6. Update of the low-rank matrix A: for formula (12), A is solved iteratively:
$$A_i^{k+1} = \arg\min_{A_i} L(A_i, E_i^k, Y_i^k, \mu_k, \lambda_2) = \arg\min_{A_i}\ \|A_i\|_* + \lambda_2\|A_i'\|_F^2 + \langle Y_i^k, D_i - A_i - E_i^k\rangle + \frac{\mu_k}{2}\|D_i - A_i - E_i^k\|_F^2 = \arg\min_{A_i}\ \varepsilon\|A_i\|_* + \frac{1}{2}\|X_a - A_i\|_F^2$$
where $\varepsilon = (2\lambda_2 + \mu_k)^{-1}$ and k is the iteration count. By the singular value decomposition, with U and V unitary matrices and S a diagonal matrix,
$$A_i^{k+1} = U S_\varepsilon V^T,\quad \text{where}\ (U, S, V^T) = \mathrm{SVD}(X_a)$$
7. Update of the error matrix E:
$$E_i^{k+1} = \arg\min_{E_i} L(A_i^{k+1}, E_i, Y_i^k, \mu_k, \lambda_2) = \arg\min_{E_i}\ \|E_i\|_1 + \langle Y_i^k, A_i^{k+1} + E_i - D_i\rangle + \frac{\mu_k}{2}\|A_i^{k+1} + E_i - D_i\|_F^2 = \arg\min_{E_i}\ \varepsilon'\|E_i\|_1 + \frac{1}{2}\|X_e - E_i\|_F^2$$
where $\varepsilon' = \frac{1}{\mu_k}$ and $X_e = D_i - A_i^{k+1} + \frac{1}{\mu_k}Y_i^k$. Once $A_i$ and $E_i$ have been obtained by iteration, the low-rank matrix decomposition is essentially complete.
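The following is a hedged sketch of one per-subject ALM loop combining the updates of steps 6 and 7; the symbols X_a and X_e follow the text, but the helper names, the μ growth factor ρ, the stopping rule, and the use of λ1/μ as the soft-threshold (the standard derivation of the E-subproblem; the text writes ε′ = 1/μ_k) are my assumptions, not patent notation.

```python
# Sketch of the per-subject iteration for formula (12).
import numpy as np

def svd_shrink(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt          # A-update prox (step 6)

def soft_thresh(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)       # E-update prox (step 7)

def decompose_subject(D_i, M_i, lam1, lam2, mu=1e-2, rho=1.5, tol=1e-7, max_iter=500):
    """D_i: training data of subject i; M_i: matrix repeating the class mean m_i column-wise."""
    A = np.zeros_like(D_i); E = np.zeros_like(D_i); Y = np.zeros_like(D_i)
    for _ in range(max_iter):
        # Step 6: X_a mixes the data-fit term with the reference term M_i,
        # then singular values are shrunk with eps = (2*lam2 + mu)^-1.
        X_a = (2 * lam2 * M_i + mu * (D_i - E) + Y) / (2 * lam2 + mu)
        A = svd_shrink(X_a, 1.0 / (2 * lam2 + mu))
        # Step 7: X_e = D_i - A + Y/mu, entries soft-thresholded (here with lam1/mu).
        X_e = D_i - A + Y / mu
        E = soft_thresh(X_e, lam1 / mu)
        R = D_i - A - E
        Y = Y + mu * R                                       # Lagrange multiplier update
        mu = mu * rho                                        # increase the penalty parameter
        if np.linalg.norm(R) / max(np.linalg.norm(D_i), 1e-12) < tol:
            break
    return A, E
```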
8. DCT illumination normalization: applying the discrete cosine transform (DCT) to the matrix A obtained from the low-rank matrix decomposition can effectively handle uneven illumination in face images and achieve illumination normalization.
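One common way to realize DCT-based illumination normalization is to suppress the low-frequency DCT coefficients, where slowly varying illumination concentrates, and then invert the transform; the patent does not spell out its exact variant, so the block size c_low below is an assumption used purely for illustration.

```python
# Sketch of DCT-based illumination normalization for a single grayscale face image.
import numpy as np
from scipy.fft import dctn, idctn

def dct_illumination_normalize(img, c_low=5):
    """img: 2-D grayscale face image; returns an illumination-normalized image."""
    C = dctn(img.astype(float), norm='ortho')   # 2-D discrete cosine transform
    dc = C[0, 0]
    C[:c_low, :c_low] = 0.0                     # discard low-frequency (illumination) terms
    C[0, 0] = dc                                # keep the overall brightness level
    return idctn(C, norm='ortho')               # inverse transform back to the image domain
```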
9. Overlapping block partition of face images: given the low-rank training matrix $A = [a_1, a_2, \ldots, a_t, \ldots, a_n]$ obtained after low-rank processing and illumination normalization, where $a_t$ is the feature vector of the t-th training image, each training image is divided into 12 overlapping blocks. Accordingly, A is divided into sub-matrices $A_1, A_2, \ldots, A_{12}$, where $A_p$ ($1 \le p \le 12$) is the data matrix formed by the p-th block of all training images; the overlapping block partition of an image is shown in Fig. 4.
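A hedged sketch of the 12-block overlapping partition follows; the exact block geometry is given only by Fig. 4, so the 4×3 grid and 50% overlap used here are assumptions for illustration.

```python
# Sketch: split an image into 12 overlapping blocks, one vectorized block per column.
import numpy as np

def overlapping_blocks(img, rows=4, cols=3, overlap=0.5):
    """Return a (block_pixels, rows*cols) matrix whose p-th column is the p-th block."""
    H, W = img.shape
    bh = int(H / (rows - (rows - 1) * overlap))    # block height so the grid covers the image
    bw = int(W / (cols - (cols - 1) * overlap))    # block width
    sh, sw = int(bh * (1 - overlap)), int(bw * (1 - overlap))
    blocks = []
    for r in range(rows):
        for c in range(cols):
            y, x = min(r * sh, H - bh), min(c * sw, W - bw)   # clamp last row/column
            blocks.append(img[y:y + bh, x:x + bw].ravel())
    return np.stack(blocks, axis=1)                # 12 columns for rows=4, cols=3
```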
10. Test image partition: following the partitioning of the training images, the test image is correspondingly divided into 12 blocks $y_1, y_2, \ldots, y_{12}$.
11. Solving the sparse coefficients of the blocks: John Wright proposed the face recognition algorithm based on sparse representation (SRC). SRC regards each test image as a linear combination of the training images, and the sparse coding problem can be solved by $l_1$ minimization. According to the theory of SRC, for any test block $y_p$:
$$y_p = A_p x_p \tag{13}$$
Because this is an underdetermined system, formula (13) has many solutions. To obtain a unique, stable solution, a constraint must be added. Because $l_2$-norm minimization is simple and convenient, it has become the most common constraint; however, the coefficients it produces are dense, which is of little benefit for recognizing test images. To obtain a more satisfactory sparse solution, $l_1$-norm minimization is used, giving the optimization problem:
$$x_p = \arg\min_x \left\{\|A_p x - y_p\|_2^2 + \lambda\|x\|_1\right\},\quad 1 \le p \le 12 \tag{14}$$
where $x_p$ is the sparse coefficient vector of the p-th block.
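As an illustration of formula (14) (not the patent's own solver), each block's sparse code can be obtained with an off-the-shelf Lasso solver; note that scikit-learn's Lasso scales the data-fit term by 1/(2n), so its alpha corresponds to the patent's λ only up to a scale factor, and alpha = 0.01 is an assumption.

```python
# Sketch: per-block sparse coding of a test image against the block dictionaries A_p.
import numpy as np
from sklearn.linear_model import Lasso

def block_sparse_codes(A_blocks, y_blocks, alpha=0.01):
    """A_blocks: list of 12 dictionaries A_p of shape (block_pixels, n_train);
       y_blocks: list of 12 test-block vectors y_p of shape (block_pixels,).
       Returns a list of 12 sparse coefficient vectors x_p."""
    codes = []
    for A_p, y_p in zip(A_blocks, y_blocks):
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(A_p, y_p)                       # solves the l1-regularized least squares
        codes.append(lasso.coef_)                 # x_p, sparse coefficients for block p
    return codes
```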
12. Pooling classification algorithm: in face recognition based on sparse representation (SRC), classification is generally performed by computing the residual $r_p$ and judging from its size which training class the test image belongs to. To avoid processing each block separately and ignoring the correlation between blocks, the present invention makes full use of the sparse coefficients of the different blocks, borrowing the idea of alignment pooling: first the sparse coefficients of the same class across different blocks are pooled together, then the sparse coefficients of each class are sorted in descending order (alignment). In this way the relevant information between blocks is exploited, and the blocks with higher weights are given more importance in the classification process. For a system with c subjects, where each subject has a training set of N training samples, the sparse coefficient vector of the p-th block of the test image is $x_p \in R^{cN\times 1}$ ($1 \le p \le 12$), and $v_{pj}$ is the sum of absolute values of the sparse coefficients of the p-th block of the test image that belong to the j-th subject, that is:
$$v_{pj} = \|\delta_j(x_p)\|_1$$
where $\delta_j(x_p)$ is the characteristic function that selects from $x_p$ the entries associated with the j-th class, so $v_p = [v_{p1}, v_{p2}, \ldots, v_{pc}]^T$. Using the idea of pooling, the sparse coefficients of all blocks are combined:
$$f = \frac{1}{12}\sum_{p=1}^{12} v_p$$
where $f \in R^{c\times 1}$; by finding the coordinate of the maximum value in f (equivalently, sorting the elements of the vector f), it can be determined which class the test image belongs to, thereby achieving correct classification and recognition.
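A hedged sketch of this pooling classifier follows: per block, sum the absolute sparse coefficients of each subject, average the 12 per-block score vectors, and pick the subject with the largest pooled score; the function and variable names are assumptions.

```python
# Sketch of the step-12 pooling classifier.
import numpy as np

def pooled_classify(codes, train_labels, n_subjects):
    """codes: list of 12 sparse vectors x_p over the training columns;
       train_labels: subject index of each training column."""
    v = np.zeros((len(codes), n_subjects))
    for p, x_p in enumerate(codes):
        for j in range(n_subjects):
            v[p, j] = np.abs(x_p[train_labels == j]).sum()   # v_pj = ||delta_j(x_p)||_1
    f = v.mean(axis=0)                                        # f = (1/12) * sum_p v_p
    return int(np.argmax(f))                                  # predicted subject index
```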

Claims (3)

1. A face recognition method based on low-rank block sparse representation, characterized in that the concrete steps are as follows:
Step 1: for each subject in the face database, randomly select part of its images as training images and the remainder as test images; assemble the training images of all subjects into an initial training data matrix and the test images into a test matrix;
Step 2: decompose the training data matrix D into A + E, where A is the low-rank decomposition matrix and E is the sparse error after decomposition; by minimizing the rank of the low-rank decomposition matrix A while reducing the value of the zero norm $\|E\|_0$, the best low-rank approximation of the training data matrix D is obtained; the low-rank matrix decomposition is formulated as:
$$\min_{A,E}\ \|A\|_* + \lambda\|E\|_1 \quad \text{s.t.}\ D = A + E \tag{1}$$
In formula (1), the nuclear norm $\|A\|_*$ is a surrogate for the rank of the low-rank decomposition matrix A, the one-norm $\|E\|_1$ replaces the zero norm $\|E\|_0$, and λ is a parameter;
Step 3: introduce a reference term and, based on the low-rank decomposition formula (1), establish the objective function:
$$\min_{A,E}\ \sum_{i=1}^{c}\left\{\|A_i\|_* + \lambda_1\|E_i\|_1\right\} + \lambda_2\,\Psi(A_1,A_2,\ldots,A_c) \quad \text{s.t.}\ D_i = A_i + E_i \tag{2}$$
In formula (2), $i = 1, 2, \ldots, c$, where c is the number of classes in the training data matrix; $D_i$ is the i-th training data matrix, $A_i$ the i-th low-rank decomposition matrix, and $E_i$ the i-th sparse error matrix; $\Psi(A_1, A_2, \ldots, A_c)$ is the reference term that improves the discriminative ability of the low-rank decomposition matrices; the parameter $\lambda_1$ is a positive weight coefficient, and the parameter $\lambda_2$ is a constant with $\lambda_2 \ge 0$;
Step 4: simplify the objective function; the final form of the objective function is:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 \quad \text{s.t.}\ D_i = A_i + E_i \tag{10}$$
where $M_i = [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}$, d is the number of rows of a face image after it is reshaped into a column vector, $m_i$ is the mean vector of the i-th subject, $n_i$ is the number of samples of the i-th subject, and $\|\cdot\|_F$ denotes the Frobenius norm;
Step 5: apply the augmented Lagrange multiplier method to formula (10):
$$L(A_i, E_i, Y_i, \mu, \lambda_2) = \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i'\|_F^2 + \langle Y_i, D_i - A_i - E_i\rangle + \frac{\mu}{2}\|D_i - A_i - E_i\|_F^2 \tag{12}$$
where $A_i' = A_i - M_i$, $Y_i$ is the Lagrange multiplier corresponding to the i-th subject, the aim is to find the extremum of the function by iteration, and μ is a positive parameter;
Step 6: for formula (12), update the low-rank decomposition matrix A by iteratively solving for A:
$$A_i^{k+1} = \arg\min_{A_i} L(A_i, E_i^k, Y_i^k, \mu_k, \lambda_2) = \arg\min_{A_i}\ \|A_i\|_* + \lambda_2\|A_i'\|_F^2 + \langle Y_i^k, D_i - A_i - E_i^k\rangle + \frac{\mu_k}{2}\|D_i - A_i - E_i^k\|_F^2 = \arg\min_{A_i}\ \varepsilon\|A_i\|_* + \frac{1}{2}\|X_a - A_i\|_F^2$$
$$A_i^{k+1} = U S_\varepsilon V^T,\quad \text{where}\ (U, S, V^T) = \mathrm{SVD}(X_a)$$
In the formula, $\varepsilon = (2\lambda_2 + \mu_k)^{-1}$, k is the iteration count, $E_i^k$ is the value of the i-th sparse error matrix after the k-th iteration, $Y_i^k$ is the value of the Lagrange multiplier of the i-th subject after the k-th iteration, and $\mu_k$ is the value of the positive parameter after the k-th iteration; the iterative process is controlled by adjusting $Y_i^k$; SVD denotes the singular value decomposition, where U and V are unitary matrices and S is a diagonal matrix;
Step 7: update the error matrix E:
$$E_i^{k+1} = \arg\min_{E_i} L(A_i^{k+1}, E_i, Y_i^k, \mu_k, \lambda_2) = \arg\min_{E_i}\ \|E_i\|_1 + \langle Y_i^k, A_i^{k+1} + E_i - D_i\rangle + \frac{\mu_k}{2}\|A_i^{k+1} + E_i - D_i\|_F^2 = \arg\min_{E_i}\ \varepsilon'\|E_i\|_1 + \frac{1}{2}\|X_e - E_i\|_F^2$$
where $\varepsilon' = \frac{1}{\mu_k}$ and $X_e = D_i - A_i^{k+1} + \frac{1}{\mu_k}Y_i^k$;
Step 8: apply the discrete cosine transform to the low-rank decomposition matrix A to perform illumination normalization;
Step 9: partition the training images into overlapping blocks;
Step 10: referring to the processing of the training images in steps 2 to 9, partition the test images into blocks in the same way;
Step 11: using the face recognition algorithm based on sparse representation, combined with $l_1$-norm minimization, solve for the sparse coefficients of each block:
$$x_p = \arg\min_x \left\{\|A_p x - y_p\|_2^2 + \lambda\|x\|_1\right\},\quad 1 \le p \le 12$$
where x is the sparse coefficient vector, $x_p$ is the sparse coefficient vector of the p-th block, and $y_p$ is the column vector formed by the p-th block of the test image;
Step 12: using the sparse coefficients of the different blocks, combine the sparse coefficients that belong to the same class across blocks, and then sort the sparse coefficients of each class in descending order; for a training data matrix with c subjects, where each class has a training set of N samples, the sparse coefficient vector of the p-th block of the test image is:
$$x_p \in R^{cN\times 1},\quad 1 \le p \le 12$$
$$v_{pj} = \|\delta_j(x_p)\|_1$$
where $v_{pj}$ is the sum of absolute values of the sparse coefficients of the p-th block of the test image that belong to the j-th subject, $\delta_j(x_p)$ is the characteristic function that selects from $x_p$ the entries associated with the j-th subject, n is the total number of training samples, and $R^{cN\times 1}$ denotes a column vector of dimension cN, with $1 \le p \le 12$;
Therefore:
$$v_p = [v_{p1}, v_{p2}, \ldots, v_{pc}]^T$$
The sparse coefficients of all blocks are then pooled:
$$f = \frac{1}{12}\sum_{p=1}^{12} v_p$$
with $f \in R^{c\times 1}$, where $R^{c\times 1}$ is a column vector of dimension c; the coordinate of the maximum value in f determines which subject the test image belongs to, thereby achieving correct classification and recognition.
2. The face recognition method based on low-rank block sparse representation according to claim 1, characterized in that the detailed process of step 9 is:
given the low-rank decomposition matrix $A = [a_1, a_2, \ldots, a_t, \ldots, a_n]$ obtained after low-rank processing and illumination normalization, where $a_t$ is the feature vector of the t-th training image, $1 \le t \le n$, each training image is divided into 12 overlapping blocks, and A is correspondingly divided into sub-matrices $A_1, A_2, \ldots, A_{12}$, where $A_p$ is the data matrix formed by the p-th block of all training images, $1 \le p \le 12$.
3. The face recognition method based on low-rank block sparse representation according to claim 1, characterized in that the detailed process of step 4 is as follows: according to the criterion of LDA, the mean vectors are defined as follows:
1. the mean vector of each class of samples:
$$m_i = \frac{1}{n_i}\sum_{a\in A_i} a \tag{3}$$
2. the overall mean vector of the samples:
$$m = \frac{1}{n}\sum_{i=1}^{c} n_i \cdot m_i \tag{4}$$
3. the within-class scatter matrix of the samples:
$$S_w = \sum_{i=1}^{c}\sum_{a\in A_i}(a - m_i)(a - m_i)^T \tag{5}$$
4. the between-class scatter matrix of the samples:
$$S_b = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T \tag{6}$$
In formulas (3), (4), (5), and (6), n is the total number of training samples, $n = n_1 + n_2 + \cdots + n_c$, and a denotes a column vector of $A_i$;
According to the principle of LDA and the properties of the between-class and within-class scatter matrices of the samples, the reference term is defined as:
$$\Psi(A_1, A_2, \ldots, A_c) = \mathrm{tr}(S_w) - \mathrm{tr}(S_b) \tag{7}$$
where $\mathrm{tr}(\cdot)$ denotes the matrix trace, $S_w$ is the within-class scatter matrix of the samples, and $S_b$ is the between-class scatter matrix of the samples; substituting formula (7) into formula (2) gives:
$$\min_{A,E}\ \sum_{i=1}^{c}\left\{\|A_i\|_* + \lambda_1\|E_i\|_1\right\} + \lambda_2\left(\mathrm{tr}(S_w) - \mathrm{tr}(S_b)\right) \quad \text{s.t.}\ D_i = A_i + E_i \tag{8}$$
The objective function is further converted to:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\,\psi(A_i) \quad \text{s.t.}\ D_i = A_i + E_i \tag{9}$$
where:
$$\psi(A_i) = \left\|A_i - [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}\right\|_F^2 - \sum_{j=1}^{c}\left\|[m_j\ m_j\ \cdots\ m_j]_{d\times n_j} - [m\ m\ \cdots\ m]_{d\times n_j}\right\|_F^2$$
$m_j$ is the mean vector of the j-th subject, and $n_i$, $n_j$ are the numbers of samples of the i-th and j-th subjects respectively;
Since, while solving the low-rank decomposition of the i-th subject $A_i$, the other subjects $A_q$ ($i \ne q$) are fixed,
$$\zeta = \sum_{j=1}^{c}\left\|[m_j\ m_j\ \cdots\ m_j]_{d\times n_j} - [m\ m\ \cdots\ m]_{d\times n_j}\right\|_F^2$$
is a constant; letting $M_i = [m_i\ m_i\ \cdots\ m_i]_{d\times n_i}$, formula (9) can be turned into:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 - \lambda_2\zeta \quad \text{s.t.}\ D_i = A_i + E_i$$
Since $\lambda_2\zeta$ is a constant, the objective function is finally arranged as:
$$\min_{A_i,E_i}\ \|A_i\|_* + \lambda_1\|E_i\|_1 + \lambda_2\|A_i - M_i\|_F^2 \quad \text{s.t.}\ D_i = A_i + E_i \tag{10}.$$
CN201310586448.1A 2013-11-20 2013-11-20 A face recognition method based on low-rank block sparse representation Active CN103632138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310586448.1A CN103632138B (en) 2013-11-20 2013-11-20 A face recognition method based on low-rank block sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310586448.1A CN103632138B (en) 2013-11-20 2013-11-20 A face recognition method based on low-rank block sparse representation

Publications (2)

Publication Number Publication Date
CN103632138A true CN103632138A (en) 2014-03-12
CN103632138B CN103632138B (en) 2016-09-28

Family

ID=50213167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310586448.1A Active CN103632138B (en) A face recognition method based on low-rank block sparse representation

Country Status (1)

Country Link
CN (1) CN103632138B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298977A (en) * 2014-10-24 2015-01-21 西安电子科技大学 Low-order representing human body behavior identification method based on irrelevance constraint
CN104392246A (en) * 2014-12-03 2015-03-04 北京理工大学 Inter-class inner-class face change dictionary based single-sample face identification method
CN104616027A (en) * 2015-02-06 2015-05-13 华东交通大学 Non-adjacent graph structure sparse face recognizing method
CN104657938A (en) * 2015-02-03 2015-05-27 中国人民解放军国防科学技术大学 Low-rank constraint-based license plate correction algorithm
CN104715266A (en) * 2015-03-12 2015-06-17 西安电子科技大学 Image characteristics extracting method based on combination of SRC-DP and LDA
CN105718934A (en) * 2016-01-25 2016-06-29 无锡中科富农物联科技有限公司 Method for pest image feature learning and identification based on low-rank sparse coding technology
CN105957026A (en) * 2016-04-22 2016-09-21 温州大学 De-noising method based on recessive low-rank structure inside and among nonlocal similar image blocks
CN106295609A (en) * 2016-08-22 2017-01-04 河海大学 The single sample face recognition method represented based on block sparsity structure low-rank
CN107169410A (en) * 2017-03-31 2017-09-15 南京邮电大学 The structural type rarefaction representation sorting technique based on LBP features for recognition of face
CN107229967A (en) * 2016-08-22 2017-10-03 北京深鉴智能科技有限公司 A kind of hardware accelerator and method that rarefaction GRU neutral nets are realized based on FPGA
CN107392128A (en) * 2017-07-13 2017-11-24 南京邮电大学 The robust image recognition methods returned based on double low-rank representations and local constraint matrix
CN107392134A (en) * 2017-07-14 2017-11-24 广州智慧城市发展研究院 A kind of face identification method and system based on joint piecemeal
CN107590505A (en) * 2017-08-01 2018-01-16 天津大学 The learning method of joint low-rank representation and sparse regression
CN107808391A (en) * 2017-10-30 2018-03-16 浙江工业大学 Video dynamic target extraction method based on feature selection and smooth representation clustering
CN107992449A (en) * 2017-12-05 2018-05-04 北京工业大学 A kind of subway anomalous traffic detection method based on low-rank representation
CN108446589A (en) * 2018-02-07 2018-08-24 杭州电子科技大学 Face identification method based on low-rank decomposition and auxiliary dictionary under complex environment
CN109063555A (en) * 2018-06-26 2018-12-21 杭州电子科技大学 The Pose-varied face recognition method compared based on low-rank decomposition and rarefaction representation residual error
CN110069978A (en) * 2019-03-04 2019-07-30 杭州电子科技大学 The face identification method that the non-convex low-rank decomposition of identification and superposition Sparse indicate
CN110265039A (en) * 2019-06-03 2019-09-20 南京邮电大学 A kind of method for distinguishing speek person decomposed based on dictionary learning and low-rank matrix
CN110633732A (en) * 2019-08-15 2019-12-31 电子科技大学 Multi-modal image recognition method based on low-rank and joint sparsity
CN110889345A (en) * 2019-11-15 2020-03-17 重庆邮电大学 Method for recovering shielding face by distinguishing low-rank matrix based on cooperative representation and classification
CN111027636A (en) * 2019-12-18 2020-04-17 山东师范大学 Unsupervised feature selection method and system based on multi-label learning
CN111048117A (en) * 2019-12-05 2020-04-21 南京信息工程大学 Cross-library speech emotion recognition method based on target adaptation subspace learning
CN111967306A (en) * 2020-07-02 2020-11-20 广东技术师范大学 Target remote monitoring method and device, computer equipment and storage medium
CN112215034A (en) * 2019-07-10 2021-01-12 重庆邮电大学 Occlusion face recognition method based on robust representation and classification of matrix decomposition and Gabor features
CN112784795A (en) * 2021-01-30 2021-05-11 深圳市心和未来教育科技有限公司 Quick face recognition and analysis equipment and system
CN117351299A (en) * 2023-09-13 2024-01-05 北京百度网讯科技有限公司 Image generation and model training method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050180626A1 (en) * 2004-02-12 2005-08-18 Nec Laboratories Americas, Inc. Estimating facial pose from a sparse representation
CN102004899A (en) * 2010-11-03 2011-04-06 无锡中星微电子有限公司 Human face identifying system and method
CN103268484A (en) * 2013-06-06 2013-08-28 温州大学 Design method of classifier for high-precision face recognition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050180626A1 (en) * 2004-02-12 2005-08-18 Nec Laboratories Americas, Inc. Estimating facial pose from a sparse representation
CN102004899A (en) * 2010-11-03 2011-04-06 无锡中星微电子有限公司 Human face identifying system and method
CN103268484A (en) * 2013-06-06 2013-08-28 温州大学 Design method of classifier for high-precision face recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄兵 et al.: "Face Age Estimation Based on Gabor Wavelets and LBP Histogram Sequences", 《数据采集与处理》 (Journal of Data Acquisition and Processing) *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298977B (en) * 2014-10-24 2017-11-03 西安电子科技大学 A kind of low-rank representation Human bodys' response method constrained based on irrelevance
CN104298977A (en) * 2014-10-24 2015-01-21 西安电子科技大学 Low-order representing human body behavior identification method based on irrelevance constraint
CN104392246B (en) * 2014-12-03 2018-02-16 北京理工大学 It is a kind of based between class in class changes in faces dictionary single sample face recognition method
CN104392246A (en) * 2014-12-03 2015-03-04 北京理工大学 Inter-class inner-class face change dictionary based single-sample face identification method
CN104657938A (en) * 2015-02-03 2015-05-27 中国人民解放军国防科学技术大学 Low-rank constraint-based license plate correction algorithm
CN104657938B (en) * 2015-02-03 2016-04-20 中国人民解放军国防科学技术大学 A kind of VLP correction algorithm based on low-rank constraint
CN104616027B (en) * 2015-02-06 2018-09-11 华东交通大学 A kind of sparse face identification method of non-adjacent graph structure
CN104616027A (en) * 2015-02-06 2015-05-13 华东交通大学 Non-adjacent graph structure sparse face recognizing method
CN104715266A (en) * 2015-03-12 2015-06-17 西安电子科技大学 Image characteristics extracting method based on combination of SRC-DP and LDA
CN104715266B (en) * 2015-03-12 2018-03-27 西安电子科技大学 The image characteristic extracting method being combined based on SRC DP with LDA
CN105718934A (en) * 2016-01-25 2016-06-29 无锡中科富农物联科技有限公司 Method for pest image feature learning and identification based on low-rank sparse coding technology
CN105957026A (en) * 2016-04-22 2016-09-21 温州大学 De-noising method based on recessive low-rank structure inside and among nonlocal similar image blocks
CN105957026B (en) * 2016-04-22 2019-02-05 温州大学 Based on the denoising method inside non local similar image block recessiveness low-rank structure between block
CN107229967A (en) * 2016-08-22 2017-10-03 北京深鉴智能科技有限公司 A kind of hardware accelerator and method that rarefaction GRU neutral nets are realized based on FPGA
CN106295609A (en) * 2016-08-22 2017-01-04 河海大学 The single sample face recognition method represented based on block sparsity structure low-rank
CN106295609B (en) * 2016-08-22 2019-05-10 河海大学 Single sample face recognition method based on block sparsity structure low-rank representation
CN107169410A (en) * 2017-03-31 2017-09-15 南京邮电大学 The structural type rarefaction representation sorting technique based on LBP features for recognition of face
CN107392128A (en) * 2017-07-13 2017-11-24 南京邮电大学 The robust image recognition methods returned based on double low-rank representations and local constraint matrix
CN107392134A (en) * 2017-07-14 2017-11-24 广州智慧城市发展研究院 A kind of face identification method and system based on joint piecemeal
CN107590505A (en) * 2017-08-01 2018-01-16 天津大学 The learning method of joint low-rank representation and sparse regression
CN107808391B (en) * 2017-10-30 2020-10-02 浙江工业大学 Video dynamic target extraction method based on feature selection and smooth representation clustering
CN107808391A (en) * 2017-10-30 2018-03-16 浙江工业大学 Video dynamic target extraction method based on feature selection and smooth representation clustering
CN107992449A (en) * 2017-12-05 2018-05-04 北京工业大学 A kind of subway anomalous traffic detection method based on low-rank representation
CN107992449B (en) * 2017-12-05 2021-04-30 北京工业大学 Subway abnormal flow detection method based on low-rank representation
CN108446589A (en) * 2018-02-07 2018-08-24 杭州电子科技大学 Face identification method based on low-rank decomposition and auxiliary dictionary under complex environment
CN108446589B (en) * 2018-02-07 2022-03-22 杭州电子科技大学 Face recognition method based on low-rank decomposition and auxiliary dictionary in complex environment
CN109063555A (en) * 2018-06-26 2018-12-21 杭州电子科技大学 The Pose-varied face recognition method compared based on low-rank decomposition and rarefaction representation residual error
CN110069978B (en) * 2019-03-04 2021-04-13 杭州电子科技大学 Discriminating non-convex low-rank decomposition and superposition linear sparse representation face recognition method
CN110069978A (en) * 2019-03-04 2019-07-30 杭州电子科技大学 The face identification method that the non-convex low-rank decomposition of identification and superposition Sparse indicate
CN110265039A (en) * 2019-06-03 2019-09-20 南京邮电大学 A kind of method for distinguishing speek person decomposed based on dictionary learning and low-rank matrix
CN110265039B (en) * 2019-06-03 2021-07-02 南京邮电大学 Speaker recognition method based on dictionary learning and low-rank matrix decomposition
CN112215034A (en) * 2019-07-10 2021-01-12 重庆邮电大学 Occlusion face recognition method based on robust representation and classification of matrix decomposition and Gabor features
CN110633732A (en) * 2019-08-15 2019-12-31 电子科技大学 Multi-modal image recognition method based on low-rank and joint sparsity
CN110633732B (en) * 2019-08-15 2022-05-03 电子科技大学 Multi-modal image recognition method based on low-rank and joint sparsity
CN110889345A (en) * 2019-11-15 2020-03-17 重庆邮电大学 Method for recovering shielding face by distinguishing low-rank matrix based on cooperative representation and classification
CN111048117B (en) * 2019-12-05 2022-06-17 南京信息工程大学 Cross-library speech emotion recognition method based on target adaptation subspace learning
CN111048117A (en) * 2019-12-05 2020-04-21 南京信息工程大学 Cross-library speech emotion recognition method based on target adaptation subspace learning
CN111027636A (en) * 2019-12-18 2020-04-17 山东师范大学 Unsupervised feature selection method and system based on multi-label learning
CN111027636B (en) * 2019-12-18 2020-09-29 山东师范大学 Unsupervised feature selection method and system based on multi-label learning
CN111967306B (en) * 2020-07-02 2021-09-14 广东技术师范大学 Target remote monitoring method and device, computer equipment and storage medium
CN111967306A (en) * 2020-07-02 2020-11-20 广东技术师范大学 Target remote monitoring method and device, computer equipment and storage medium
CN112784795A (en) * 2021-01-30 2021-05-11 深圳市心和未来教育科技有限公司 Quick face recognition and analysis equipment and system
CN117351299A (en) * 2023-09-13 2024-01-05 北京百度网讯科技有限公司 Image generation and model training method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN103632138B (en) 2016-09-28

Similar Documents

Publication Publication Date Title
CN103632138A (en) Low-rank partitioning sparse representation human face identifying method
Chang et al. Unsupervised transfer learning via multi-scale convolutional sparse coding for biomedical applications
Lee et al. Self-attention graph pooling
CN104392246B (en) It is a kind of based between class in class changes in faces dictionary single sample face recognition method
CN102938065B (en) Face feature extraction method and face identification method based on large-scale image data
Li et al. Subspace clustering by mixture of gaussian regression
CN104700087B (en) The method for mutually conversing of visible ray and near-infrared facial image
Wang et al. Attractive or not? Beauty prediction with attractiveness-aware encoders and robust late fusion
CN108647690A (en) The sparse holding projecting method of differentiation for unconstrained recognition of face
CN106056088B (en) The single sample face recognition method of criterion is generated based on adaptive virtual sample
Suo et al. Structured dictionary learning for classification
CN105574475A (en) Common vector dictionary based sparse representation classification method
CN104700089A (en) Face identification method based on Gabor wavelet and SB2DLPP
Zheng et al. Improved sparse representation with low-rank representation for robust face recognition
Kastaniotis et al. HEp-2 cell classification with vector of hierarchically aggregated residuals
Cherian et al. Denoising sparse noise via online dictionary learning
Sun et al. Appearance Prompt Vision Transformer for Connectome Reconstruction.
CN110633732B (en) Multi-modal image recognition method based on low-rank and joint sparsity
CN107784284A (en) Face identification method and system
Dhamija et al. A novel active shape model-based DeepNeural network for age invariance face recognition
You et al. Robust structure low-rank representation in latent space
Sangamesh et al. A Novel Approach for Recognition of Face by Using Squeezenet Pre-Trained Network
Bao et al. Flow-based point cloud completion network with adversarial refinement
CN117095191A (en) Image clustering method based on projection distance regularized low-rank representation
Nozaripour et al. Image classification via convolutional sparse coding

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 211500 No. 219 Ning six road, Jiangbei new district, Nanjing, Jiangsu

Patentee after: Nanjing University of Information Science and Technology

Address before: Zhongshan road Wuzhong District Mudu town of Suzhou city in Jiangsu province 215101 No. 70 Wuzhong Science Park Building 2 room 2310

Patentee before: Nanjing University of Information Science and Technology

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231109

Address after: 518, 5th Floor, No. 2 Jinguang South Street, Xilu Street, Fangshan District, Beijing, 102400

Patentee after: Beijing Mingde Xinmin Sports Culture Co.,Ltd.

Address before: 219 ningliu Road, Jiangbei new district, Nanjing City, Jiangsu Province

Patentee before: Nanjing University of Information Science and Technology