CN110378262B - Additive Gaussian kernel based kernel nonnegative matrix factorization face recognition method, device and system and storage medium - Google Patents


Info

Publication number: CN110378262B (application number CN201910610652.XA)
Authority: CN (China)
Prior art keywords: matrix, module, kernel, training sample, class
Legal status: Active (granted)
Inventors: 陈文胜, 陈海涛, 黄显坤
Assignee: Shenzhen University (original and current)
Application filed by Shenzhen University; priority to CN201910610652.XA
Other versions: CN110378262A (Chinese-language application publication)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Abstract

The invention provides a kernel nonnegative matrix factorization face recognition method, device, system and storage medium based on an additive Gaussian kernel. The invention has the following beneficial effects: the additive-Gaussian-kernel-based kernel nonnegative matrix factorization face recognition method is robust to noise and has a high convergence rate, and compared with existing related algorithms it offers better recognition performance and robustness.

Description

Additive Gaussian kernel based kernel nonnegative matrix factorization face recognition method, device and system and storage medium
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face recognition method, device, system and storage medium based on kernel nonnegative matrix factorization with an additive Gaussian kernel.
Background
With the advent of the information age, biometric identification technology, which identifies an individual using physiological and behavioral characteristics inherent to the human body, has become one of the most active research fields. Among the many branches of biometrics, face recognition is one of the most widely accepted techniques, because compared with other biometric technologies it is non-intrusive, non-mandatory, non-contact, and can handle multiple subjects concurrently.
Face recognition technology comprises two stages. The first stage is feature extraction, i.e. extracting facial feature information from a face image; this stage directly determines the quality of the overall system. The second stage is identity authentication, performed on the basis of the extracted feature information. Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) are classical feature extraction methods, but the feature vectors they produce usually contain negative elements, so they lack a reasonable interpretation when the original samples are non-negative data. Non-negative Matrix Factorization (NMF) is a feature extraction method for non-negative data, with very wide applications such as hyperspectral data processing and face image recognition. During the decomposition of the non-negative data matrix, the NMF algorithm constrains the extracted features to be non-negative, i.e. all components after decomposition are non-negative, so non-negative sparse features can be extracted. The essence of NMF is to approximately decompose the non-negative matrix X into the product of a base image matrix W and a coefficient matrix H, i.e. X ≈ WH, where both W and H are non-negative matrices. Each column of X can thus be represented as a non-negative linear combination of the columns of W, which matches the intuition underlying NMF: the perception of the whole is composed, purely additively, of the perception of the parts that make up the whole.
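The decomposition X ≈ WH described above can be sketched with the classical multiplicative updates for NMF under the Frobenius loss (an illustrative standard algorithm, not the patent's method; all sizes and data are toy):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 30))              # non-negative data matrix
r = 5                                 # number of base images
W = rng.random((20, r)) + 0.1         # base image matrix
H = rng.random((r, 30)) + 0.1         # coefficient matrix

eps = 1e-10                           # guards against division by zero
losses = []
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # coefficient update
    W *= (X @ H.T) / (W @ H @ H.T + eps)   # base image update
    losses.append(0.5 * np.linalg.norm(X - W @ H, "fro") ** 2)

print(losses[0], losses[-1])
```

Because all updates are multiplicative with non-negative factors, W and H stay non-negative throughout, and the loss is non-increasing.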
In recent years, many NMF variants have been proposed, such as the robust NMF algorithm (RNMF) for enhancing robustness, the graph NMF algorithm (GNMF) for preserving local features, and the orthogonal NMF algorithm (ONMF) which introduces orthogonality constraints. However, these NMF algorithms are all linear methods. In face recognition, interference factors such as occlusion, illumination and expression make face images very complicated; the face recognition problem becomes non-linear, so linear methods are no longer applicable.
For non-linear problems, the kernel method is effective and provides an elegant theoretical framework for extending linear algorithms to non-linear ones. Its basic idea is to map the original data into a high-dimensional feature space with a non-linear mapping function so that the mapped data become linearly separable, and then apply a linear algorithm to the mapped data. The most critical part of the kernel method is the kernel trick: the inner products of the mapped data are replaced by kernel function evaluations, so the explicit analytic expression of the non-linear mapping never needs to be known. The kernel trick reduces the difficulty of extending the mapping to a function space, namely a Reproducing Kernel Hilbert Space (RKHS). Polynomial kernels and Gaussian kernels are two commonly used kernel functions. With the kernel method, the linear NMF algorithm can be generalized to the kernel NMF algorithm (KNMF). The main idea of KNMF is to map the matrix X into a high-dimensional feature space through a non-linear mapping function φ, and in that feature space approximately decompose φ(X) into the product of two matrices φ(W) and H using NMF, i.e. φ(X) ≈ φ(W)H, where W and H are non-negative matrices.
The existing KNMF algorithm comprises a polynomial kernel nonnegative matrix decomposition algorithm (PNMF) and a Gaussian kernel nonnegative matrix decomposition algorithm (RBFNMF), and the two kernel functions are sensitive to noise and abnormal values, so that the stability is poor.
The related technical scheme is as follows:
1. The kernel method:
Let {x_1, x_2, …, x_n} be a set of data in the original sample space. The main idea of the kernel method is to map the samples from the original space into a higher-dimensional feature space through a non-linear mapping function φ(·) such that the samples become linearly separable there; such a high-dimensional feature space must exist as long as the original space has finite dimension. In the feature space, a linear method can then be used to process the sample data. But the feature space may have very high, even infinite, dimension, and the specific form of the non-linear mapping is difficult to determine. To circumvent these obstacles, the kernel function can be used ingeniously:
k(x_i, x_j) = <φ(x_i), φ(x_j)> = φ(x_i)^T φ(x_j),
i.e. the inner product of x_i and x_j in the feature space can be obtained from the function k(·,·) evaluated in the original sample space. This not only solves the above problems but also simplifies the computation.
Commonly used kernel functions include the polynomial kernel k(x_i, x_j) = (x_i^T x_j + c)^d and the Gaussian kernel (RBF) k(x_i, x_j) = exp(−||x_i − x_j||^2 / (2δ^2)).
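The two standard kernels named above can be evaluated on raw data so that the feature map φ never has to be formed explicitly; a minimal sketch (degree d, offset c and width delta are illustrative choices, not values fixed by the patent):

```python
import numpy as np

def polynomial_kernel(X, Y, c=1.0, d=2):
    # k(x, y) = (x^T y + c)^d; columns of X and Y are samples
    return (X.T @ Y + c) ** d

def gaussian_kernel(X, Y, delta=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 delta^2)), computed pairwise
    sq = (np.sum(X**2, axis=0)[:, None] - 2 * X.T @ Y
          + np.sum(Y**2, axis=0)[None, :])
    return np.exp(-sq / (2 * delta**2))

rng = np.random.default_rng(1)
X = rng.random((4, 6))                    # 6 samples in R^4, one per column
K = gaussian_kernel(X, X)
eigs = np.linalg.eigvalsh(K)              # valid kernel matrix => PSD
print(K.shape, eigs.min())
```

A symmetric positive semidefinite kernel matrix is exactly the defining property of a valid kernel (see the kernel function definition later in this document).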
2. The kernel non-negative matrix factorization algorithm (KNMF)
The main purpose of KNMF is to apply NMF to non-linear problems using the kernel method. First, the sample data X = [x_1, x_2, …, x_n] in the original space are mapped by the mapping function φ(·) into a high-dimensional feature space, giving the mapped sample data φ(X) = [φ(x_1), φ(x_2), …, φ(x_n)], so that the sample data become linearly separable. The mapped data are then processed in the high-dimensional feature space with an NMF algorithm, approximately decomposing φ(X) into the product of two matrices φ(W) and H, i.e.
φ(X) ≈ φ(W)H,
where W is the base image matrix and H is the coefficient matrix. To measure the loss incurred in the matrix decomposition, a loss function F(W, H) is constructed; the smaller its value, the more reasonable the decomposition. The optimization problem to be solved by KNMF is therefore:
min F(W, H) s.t. W ≥ 0, H ≥ 0, (1)
where the loss function F(W, H) is defined as:
F(W, H) = (1/2)||φ(X) − φ(W)H||_F^2.
In the KNMF algorithm, the most important factor is the selection of the kernel function k(·,·), which implicitly determines the high-dimensional feature space; an improperly chosen kernel maps the sample data into an unsuitable feature space and may lead to poor performance.
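The loss F(W, H) above can be evaluated without ever forming φ, since ||φ(X) − φ(W)H||_F^2 expands into kernel matrices only: F = (1/2)(tr K_XX − 2 tr(H^T K_WX) + tr(H^T K_WW H)). A sketch with pluggable kernels (the linear kernel, where φ is the identity, serves as a self-check against the plain NMF loss):

```python
import numpy as np

def knmf_loss(X, W, H, k):
    # F = 0.5 * (tr K_XX - 2 tr(H^T K_WX) + tr(H^T K_WW H))
    K_XX, K_WX, K_WW = k(X, X), k(W, X), k(W, W)
    return 0.5 * (np.trace(K_XX) - 2.0 * np.trace(H.T @ K_WX)
                  + np.trace(H.T @ K_WW @ H))

linear = lambda A, B: A.T @ B        # phi = identity: reduces to NMF loss

def gaussian(A, B, delta=1.0):
    sq = (np.sum(A**2, axis=0)[:, None] - 2 * A.T @ B
          + np.sum(B**2, axis=0)[None, :])
    return np.exp(-sq / (2 * delta**2))

rng = np.random.default_rng(2)
X, W, H = rng.random((8, 10)), rng.random((8, 3)), rng.random((3, 10))

# With the linear kernel the kernel-trick formula must agree with
# 0.5 * ||X - WH||_F^2 computed directly:
print(knmf_loss(X, W, H, linear),
      0.5 * np.linalg.norm(X - W @ H, "fro") ** 2)
print(knmf_loss(X, W, H, gaussian))
```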
3. The polynomial-kernel non-negative matrix factorization (PNMF) algorithm
The loss function of PNMF is F(W, H); solving optimization problem (1) with the polynomial kernel yields multiplicative update iteration formulas for W and H [equation images], in which B is a diagonal matrix whose diagonal elements are likewise given by an equation image.
The disadvantages of the related technical scheme are as follows:
1. the non-negative matrix factorization algorithm is a linear algorithm, and many problems in real life are non-linear, so that satisfactory effects are difficult to achieve.
2. Most of the current kernel nonnegative matrix factorization algorithms are based on polynomial kernel functions or Gaussian kernel functions, and the two kernel functions cannot eliminate the influence of abnormal values, so that the stability of the algorithms is poor.
Disclosure of Invention
The invention provides a kernel nonnegative matrix factorization face recognition method based on an additive Gaussian kernel, which comprises a training step. The training step comprises:
First step: convert the training sample images into a training sample matrix V, set an error threshold ε and a maximum iteration number I_max, and input the training sample matrix V, the error threshold ε and the maximum iteration number I_max;
Second step: initialize the base image matrix W and the coefficient matrix H;
Third step: set the iteration counter n = 0;
Fourth step: update the base image matrix W and the coefficient matrix H according to formula (12);
Fifth step: let n = n + 1;
Sixth step: judge whether the objective function F(W, H) ≤ ε or the iteration number n has reached the maximum I_max; if yes, output the base image matrix W and the coefficient matrix H, otherwise return to the fourth step;
In the fourth step, formula (12) consists of multiplicative update rules for w_k and H [equation images]. In formula (12), W denotes the base image matrix, H denotes the coefficient matrix, and w_k is the k-th column of W; w_k^(t) and H^(t) denote the values of w_k and H at the t-th iteration, and w_k^(t+1) and H^(t+1) denote their values at the (t+1)-th iteration. The kernel matrix K_WX has elements k_ij = K(w_i, x_j), where K is the additive Gaussian kernel function, and w_i and x_j are the i-th column of the base image matrix W and the j-th column of the training sample matrix X, respectively. The kernel matrix K_WW has elements k_ij = K(w_i, w_j), where w_i and w_j are the i-th and j-th columns of W, respectively. The elements of s represent the column sums of the corresponding columns of W, i.e. s = (s_lk), with s_lk given by an equation image. The two remaining quantities appearing in (12) are defined by equation images in terms of: x_i, the i-th column of the training sample matrix X; w_k^(t), the k-th column of W^(t); h_ki^(t), the (k, i) element of the matrix H^(t); and (HH^T)_ki^(t), the (k, i) element of the matrix (HH^T)^(t).
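The six training steps can be sketched as the loop below. This is a structural sketch, not the patent's full algorithm: the update step is a pluggable stand-in that applies the generic multiplicative KNMF coefficient update H ← H ⊙ K_WX ⊘ (K_WW H) with an ordinary Gaussian kernel and leaves W fixed, since the additive-Gaussian W-update of formula (12) is given only as equation images above.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    # Kernel matrix K[i, j] = k(a_i, b_j) over columns of A and B.
    sq = (np.sum(A**2, axis=0)[:, None] - 2 * A.T @ B
          + np.sum(B**2, axis=0)[None, :])
    return np.exp(-sq / (2 * sigma**2))

def loss(X, W, H):
    # F(W, H) = 0.5 * ||phi(X) - phi(W)H||_F^2 via the kernel trick.
    return 0.5 * (np.trace(gaussian_gram(X, X))
                  - 2 * np.trace(H.T @ gaussian_gram(W, X))
                  + np.trace(H.T @ gaussian_gram(W, W) @ H))

def update(X, W, H):
    # Stand-in for formula (12): generic multiplicative H-update only.
    H = H * gaussian_gram(W, X) / (gaussian_gram(W, W) @ H + 1e-10)
    return W, H

def train(X, r, eps=1e-3, i_max=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], r)) + 0.1     # second step: initialize W, H
    H = rng.random((r, X.shape[1])) + 0.1
    n = 0                                     # third step
    while True:
        W, H = update(X, W, H)                # fourth step
        n += 1                                # fifth step
        if loss(X, W, H) <= eps or n >= i_max:  # sixth step
            return W, H, n

X = np.random.default_rng(3).random((10, 15))   # stand-in training matrix
W, H, n_iter = train(X, r=4)
print(n_iter, loss(X, W, H))
```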
As a further improvement of the invention, the kernel nonnegative matrix factorization face recognition method further comprises a recognition step after the training step. The recognition step comprises:
Seventh step: calculate the average feature vector m_j (j = 1, …, c) of each class in the training samples, where c is the number of distinct face classes and j indexes the j-th class;
Eighth step: input the face image y to be recognized and calculate its feature vector h_y = W^+ y, where W^+ is the Moore-Penrose inverse of W;
Ninth step: calculate the distance from the feature vector h_y of the face image to be recognized to the average feature vector m_j of class j, d_j = ||h_y − m_j||_F, j = 1, …, c, where ||·||_F is the Frobenius norm; if the distance d_p to the average feature vector m_p of the p-th class is minimal, i.e. p = arg min_j d_j, classify the face image y to be recognized into the p-th class;
Tenth step: output the class p, thereby completing face recognition.
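The recognition steps above amount to nearest-class-mean classification with features h_y = W^+ y. A minimal sketch, where the base image matrix W, the two classes, their prototypes and the noise level are all illustrative stand-ins, not data from the patent:

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.random((12, 3)) + 0.1             # stand-in base image matrix
W_pinv = np.linalg.pinv(W)                # W^+ (Moore-Penrose inverse)

protos = {0: np.full(12, 0.2), 1: np.full(12, 0.8)}   # class prototypes
train = {j: np.clip(p[:, None] + 0.01 * rng.standard_normal((12, 5)), 0, 1)
         for j, p in protos.items()}      # 5 noisy samples per class

# Seventh step: average feature vector m_j of each class.
means = {j: (W_pinv @ Xj).mean(axis=1) for j, Xj in train.items()}

# Eighth to tenth steps: feature h_y, distances d_j, output class p.
y = protos[1]                             # a test face resembling class 1
h_y = W_pinv @ y
d = {j: np.linalg.norm(h_y - m) for j, m in means.items()}
p = min(d, key=d.get)
print(p, d)
```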
The invention also provides an additive-Gaussian-kernel-based kernel nonnegative matrix factorization face recognition device, which comprises a training module. The training module comprises:
An input module: for converting the training sample images into a training sample matrix V, setting an error threshold ε and a maximum iteration number I_max, and inputting the training sample matrix V, the error threshold ε and the maximum iteration number I_max;
An initialization module: for initializing the base image matrix W and the coefficient matrix H;
An assignment module: for setting the iteration counter n = 0;
An update module: for updating the base image matrix W and the coefficient matrix H according to formula (12);
A counting module: for letting n = n + 1;
A judging module: for judging whether the objective function F(W, H) ≤ ε or the iteration number n has reached the maximum I_max; if yes, outputting the base image matrix W and the coefficient matrix H, otherwise returning to the update module;
In the update module, formula (12) consists of multiplicative update rules for w_k and H [equation images]. In formula (12), W denotes the base image matrix, H denotes the coefficient matrix, and w_k is the k-th column of W; w_k^(t) and H^(t) denote the values of w_k and H at the t-th iteration, and w_k^(t+1) and H^(t+1) denote their values at the (t+1)-th iteration. The kernel matrix K_WX has elements k_ij = K(w_i, x_j), where K is the additive Gaussian kernel function, and w_i and x_j are the i-th column of the base image matrix W and the j-th column of the training sample matrix X, respectively. The kernel matrix K_WW has elements k_ij = K(w_i, w_j), where w_i and w_j are the i-th and j-th columns of W, respectively. The elements of s represent the column sums of the corresponding columns of W, i.e. s = (s_lk), with s_lk given by an equation image. The two remaining quantities appearing in (12) are defined by equation images in terms of: x_i, the i-th column of the training sample matrix X; w_k^(t), the k-th column of W^(t); h_ki^(t), the (k, i) element of the matrix H^(t); and (HH^T)_ki^(t), the (k, i) element of the matrix (HH^T)^(t).
As a further improvement of the invention, the kernel nonnegative matrix factorization face recognition device further comprises a recognition module executed after the training module. The recognition module comprises:
An average feature vector calculation module: for calculating the average feature vector m_j (j = 1, …, c) of each class in the training samples, where j indexes the j-th class and c is the number of distinct face classes;
A feature vector calculation module: for inputting the face image y to be recognized and calculating its feature vector h_y = W^+ y, where W^+ is the Moore-Penrose inverse of W, and W denotes the base image matrix;
A distance calculation module: for calculating the distance from the feature vector h_y of the face image to be recognized to the average feature vector m_j of class j, d_j = ||h_y − m_j||_F, j = 1, …, c, where ||·||_F is the Frobenius norm; if the distance d_p to the average feature vector m_p of the p-th class is minimal, i.e. p = arg min_j d_j, classifying the face image y to be recognized into the p-th class;
An output module: for outputting the class p, thereby completing face recognition.
The invention also discloses a computer-readable storage medium storing a computer program configured to, when invoked by a processor, implement the steps of the method of the invention.
The invention also discloses a kernel nonnegative matrix factorization face recognition system based on the additive Gaussian kernel, which comprises the following steps: memory, a processor and a computer program stored on the memory, the computer program being configured to carry out the steps of the method of the invention when called by the processor.
The invention has the beneficial effects that: the additive Gaussian kernel-based kernel nonnegative matrix decomposition face recognition method has robustness on noise and higher convergence rate, and compared with the existing correlation algorithm, the method has certain superiority and robustness.
Drawings
FIG. 1 is a flow chart of the algorithm construction process of the present invention;
FIG. 2 is a flow chart of a method of the present invention;
FIG. 3 is a comparison of the recognition rates of the additive-Gaussian-kernel-based kernel nonnegative matrix factorization face recognition method of the present invention and related algorithms (NMF, KPCA, PNMF) on the Caltech101 face database;
FIG. 4 is a comparison of the recognition rates of the additive-Gaussian-kernel-based kernel nonnegative matrix factorization face recognition method of the present invention and related algorithms (NMF, KPCA, PNMF) on the Caltech101 face database with added salt-and-pepper noise;
FIG. 5 is a convergence curve diagram of the additive Gaussian kernel based kernel nonnegative matrix factorization face recognition method of the present invention.
Detailed Description
In order to solve the problems in the background art, the invention provides a robust kernel function and applies it to the NMF algorithm, obtaining a novel robust kernel NMF algorithm. Experimental results show that the new kernel NMF algorithm has excellent performance.
The additive-Gaussian-kernel-based kernel nonnegative matrix factorization face recognition method disclosed by the invention mainly aims to achieve the following:
1. solve the poor robustness of the polynomial kernel function and the traditional Gaussian kernel function;
2. construct an additive Gaussian kernel function with noise immunity;
3. construct a kernel nonnegative matrix factorization face recognition method that resists noise and has high recognition performance.
Keyword interpretation:
1. Description of symbols
X — a matrix
x_j — the j-th column of the matrix X
x.^2 — the element-wise square of the vector x
x_ij — the (i, j)-th element of the matrix X
A ⊙ B — the element-wise (Hadamard) product of the matrices A and B
A ⊘ B — the element-wise quotient of the matrices A and B
2. Non-negative Matrix Factorization (Non-negative Matrix Factorization, NMF)
The basic idea of NMF is to approximately decompose a non-negative sample matrix X into the product of two non-negative matrices, namely:
X ≈ WH,
where W and H are called the base image matrix and the coefficient matrix, respectively. The degree of approximation between X and WH is measured by constructing a loss function, typically defined via the F-norm as:
F(W, H) = (1/2)||X − WH||_F^2.
3. kernel Function (Kernel Function)
Let χ be the input space and k(·,·) a symmetric function defined on χ × χ. Then k is a kernel function if and only if, for arbitrary data D = {x_1, x_2, …, x_n}, the kernel matrix K = (k(x_i, x_j))_{n×n} is always positive semidefinite.
the specific technical scheme is as follows:
in order to overcome the problem that the existing non-negative matrix factorization algorithm is poor in robustness, a novel additive Gaussian kernel function is constructed.
Definition 1: for arbitrary vectors x, y ∈ R^m, the function k is defined as:
k(x, y) = Σ_{i=1}^m exp(−(x_i − y_i)^2 / (2σ^2)).
It can be proven that k is a kernel function; we call it the additive Gaussian kernel function. The value of the kernel characterizes the degree of similarity between two samples: the larger the value, the greater the similarity.
The additive kernel is insensitive to noise, whereas the conventional Gaussian kernel is a multiplicative kernel, i.e.
k(x, y) = exp(−Σ_{i=1}^m (x_i − y_i)^2 / (2σ^2)) = Π_{i=1}^m exp(−(x_i − y_i)^2 / (2σ^2)),
which is not robust to noise: a few heavily corrupted components drive the whole product toward zero, while they perturb only the corresponding summands of the additive kernel. Therefore, developing the kernel nonnegative matrix factorization face recognition algorithm with the constructed additive Gaussian kernel can effectively overcome the influence of noise and enhance the robustness of the algorithm.
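The robustness claim can be illustrated numerically. Reading the additive Gaussian kernel as a per-coordinate sum of Gaussians (normalized by the dimension here for a [0, 1]-valued similarity; this normalization is our convenience, not the patent's), a few corrupted pixels barely change it, while the multiplicative (ordinary) Gaussian kernel collapses toward zero. Pixel values and σ below are illustrative:

```python
import numpy as np

def additive_gauss(x, y, sigma=0.2):
    # Sum of per-pixel Gaussians, normalized by the number of pixels.
    return np.mean(np.exp(-(x - y) ** 2 / (2 * sigma**2)))

def multiplicative_gauss(x, y, sigma=0.2):
    # Ordinary RBF kernel: product of the same per-pixel factors.
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

x = np.full(100, 0.5)            # clean "image", pixel values in [0, 1]
y = x.copy()
y[:10] = 1.0                     # salt noise on 10% of the pixels

add_sim = additive_gauss(x, y)
mul_sim = multiplicative_gauss(x, y)
print(add_sim, mul_sim)
```

With these numbers the additive similarity stays above 0.9 (90 of 100 summands are still exactly 1), while the multiplicative similarity is numerically indistinguishable from 0.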
1. Novel KNMF proposal
Construction of the objective function
The objective function of the new KNMF is defined as follows:
F(W, H) = (1/2)||φ(X) − φ(W)H||_F^2, s.t. W ≥ 0, H ≥ 0. (2)
To solve for the two unknown non-negative matrices W and H in the objective function (2) using the newly constructed additive Gaussian kernel function, we convert the objective function into two sub-objective functions:
f_1(H) = F(W, H) with W fixed,
f_2(W) = F(W, H) with H fixed.
Problem (2) then splits into two sub-problems:
min f_1(H) s.t. H ≥ 0, (3)
min f_2(W) s.t. W ≥ 0. (4)
learning of coefficient matrix H
For the subproblem (3), the coefficient matrix H is solved by using a gradient descent method, which includes:
Figure BDA0002122248720000112
wherein
Figure BDA0002122248720000113
Is about h k The step-size vector of (a) is,
Figure BDA0002122248720000114
is f 1 (H) About h k The gradient of (c) can be calculated as:
Figure BDA0002122248720000115
substituting equation (6) into equation (5) has
Figure BDA0002122248720000116
To ensure h k Is not negative, let:
Figure BDA0002122248720000117
thus, the step size vector is selected as:
Figure BDA0002122248720000118
will gradient
Figure BDA0002122248720000119
And step size vector
Figure BDA00021222487200001110
Substituted into equation (5) to obtain h k The update iteration formula of (c) is:
Figure BDA00021222487200001111
this updated iterative formula can be converted to a matrix form and has the following theorem.
Theorem 2: fixed matrix W, objective function f 1 (H) Is not increased, the coefficient matrix H in the current sub-problem (3) is updated in the following iterative manner:
Figure BDA0002122248720000121
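Theorem 2 states that, with W fixed, the multiplicative update H ← H ⊙ K_WX ⊘ (K_WW H) does not increase f_1(H); this is the standard KNMF coefficient update, and its monotonicity can be checked numerically (Gaussian kernel and sizes are illustrative stand-ins):

```python
import numpy as np

def gram(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix over columns of A and B.
    sq = (np.sum(A**2, axis=0)[:, None] - 2 * A.T @ B
          + np.sum(B**2, axis=0)[None, :])
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(5)
X = rng.random((8, 12))
W = rng.random((8, 4))                    # fixed base image matrix
H = rng.random((4, 12)) + 0.1
K_XX, K_WX, K_WW = gram(X, X), gram(W, X), gram(W, W)

def f1(H):
    # f_1(H) = 0.5 * ||phi(X) - phi(W)H||^2, via kernel matrices only.
    return 0.5 * (np.trace(K_XX) - 2 * np.trace(H.T @ K_WX)
                  + np.trace(H.T @ K_WW @ H))

losses = [f1(H)]
for _ in range(50):
    H = H * K_WX / (K_WW @ H + 1e-12)     # theorem-2 style update
    losses.append(f1(H))
print(losses[0], losses[-1])
```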
learning of the base image matrix W
For sub-problem (4), matrix H is fixed, and base image matrix W is learned. We define f 2 (W) is the matrix in the objective function F (W, H) with respect to the variable W, there is
Figure BDA0002122248720000122
The gradient descent method is adopted to solve the basic image matrix W, and the method comprises the following steps:
Figure BDA0002122248720000123
wherein
Figure BDA0002122248720000124
Is about w k The step-size vector of (a) is,
Figure BDA0002122248720000125
is f 2 (W) with respect to W k The gradient of (c) can be calculated as:
Figure BDA0002122248720000126
to ensure w k We choose the step size as:
Figure BDA0002122248720000127
will gradient
Figure BDA0002122248720000128
And step size vector
Figure BDA0002122248720000129
Substituting into equation (8), the equation for w can be found k The iterative formula of (a) is:
Figure BDA00021222487200001210
and has the following theorem.
Theorem 3: fixed matrix H, objective function f 2 (W) is not incremented, and the base image matrix W in the current sub-problem (4) is updated according to the iterative formula (11).
To sum up, the updated iterative formula of the fractional power inner product kernel non-negative matrix decomposition proposed by the present patent can be obtained by theorem 1 and theorem 2, which is:
Figure BDA0002122248720000131
wherein s =(s) lk ) And is
Figure BDA0002122248720000132
Figure BDA0002122248720000133
Figure BDA0002122248720000134
2. Demonstration of convergence
The convergence of the iterative formula (7) has been demonstrated in the literature; here we mainly discuss the convergence of the iterative formula (11). For this we need the definition and property of an auxiliary function:
Definition: for arbitrary vectors w and w^(t), if the conditions
G(w, w^(t)) ≥ f(w) and G(w^(t), w^(t)) = f(w^(t))
are satisfied, then G(w, w^(t)) is called an auxiliary function of the function f(w).
Lemma 1: if G(w, w^(t)) is an auxiliary function of f(w), then f(w) is monotonically non-increasing under the update rule
w^(t+1) = arg min_w G(w, w^(t)).
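Lemma 1 is the majorize-minimize principle. A one-dimensional illustration (not the patent's auxiliary function G): majorize f by the quadratic G(w, w_t) = f(w_t) + f'(w_t)(w − w_t) + (L/2)(w − w_t)^2, valid whenever L bounds f'' on the region visited; minimizing G gives w_{t+1} = w_t − f'(w_t)/L, under which f never increases. The f and L below are illustrative:

```python
# f(w) = (w^2 - 1)^2 has f''(w) = 12 w^2 - 4 <= 44 on [0, 2], so L = 44
# makes G a valid majorizer along the whole trajectory started at w = 2.
f = lambda w: (w**2 - 1.0) ** 2
fp = lambda w: 4.0 * w * (w**2 - 1.0)     # derivative f'
L = 44.0

w = 2.0
values = [f(w)]
for _ in range(100):
    w = w - fp(w) / L                     # w_{t+1} = argmin_w G(w, w_t)
    values.append(f(w))
print(w, values[-1])
```

As lemma 1 predicts, the sequence f(w_t) is non-increasing, and here it converges to the minimizer w = 1.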
next, we prove the validity of theorem 3 by constructing an auxiliary function, that is, prove that the new algorithm constructed by the present patent has convergence.
Theorem 4: let U(w_k^(t)) be a diagonal matrix whose diagonal elements are given by an equation image. Then, when σ ≥ 8,
G(w_k, w_k^(t)) [equation image]
is an auxiliary function of f_2(w_k), where the terms of G(w_k, w_k^(t)) are expressed by an equation image.
Proof: clearly, when w_k = w_k^(t), we have G(w_k^(t), w_k^(t)) = f_2(w_k^(t)), so we only need to prove that G(w_k, w_k^(t)) ≥ f_2(w_k).
The second-order Taylor expansion of f_2(w_k) at w_k^(t) is given by an equation image. Proving the inequality G(w_k, w_k^(t)) ≥ f_2(w_k) is then equivalent to proving that a certain matrix M [equation image] is positive semidefinite, i.e. that v^T M v ≥ 0 for every vector v.
From w_ai ∈ [0, 1] we obtain bounds on the corresponding kernel terms [equation images], so it suffices to prove a scalar inequality [equation image]. One side of this inequality is easily seen to be monotonically increasing on R_+ and the other monotonically decreasing, and at σ = 8 the inequality holds; therefore the matrix M is guaranteed to be positive semidefinite whenever σ ≥ 8.
In summary, we have proved the inequality G(w_k, w_k^(t)) ≥ f_2(w_k). By the definition of an auxiliary function and lemma 1, the function G(w_k, w_k^(t)) is an upper bound of f_2(w_k). Minimizing G with respect to w_k, i.e. setting its derivative to zero [equation image], yields the update formula (11) [equation images]. By the property of the auxiliary function, our algorithm is therefore convergent.
3. Feature extraction
Assuming y is a test sample, the non-linear mapping φ maps it into feature space, and φ (y) can be expressed as a linear combination of the column vectors of the mapped base image matrix φ (W), as:
φ(y)=φ(W)h y
wherein h is y Is the feature vector of phi (y). Upper type two-side ride-by-ride phi (W) Τ Is obtained by
φ(W) Τ φ(y)=φ(W) Τ φ(W)h y
That is to say that the first and second electrodes,
K Wy =K WW h y ,
wherein K is Wy Is a kernel vector. Thus, characteristic h y Can be found as
Figure BDA0002122248720000172
Wherein the content of the first and second substances,
Figure BDA0002122248720000173
is a matrix K WW The generalized inverse of (1). Similarly, we can get the average feature vector of the training samples. Suppose there are c types of samples in the original space, where the training sample number of the j type is n j (j =1,2, \8230;, c), the training sample matrix is X j Then the average feature vector of class j can be expressed as:
m_j = (1/n_j) (K_WW)^+ K_WX_j 1_{n_j} ,
where 1_{n_j} is an n_j × 1 all-ones column vector.
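The feature-extraction formulas above reduce to linear algebra on the kernel matrices. A minimal numpy sketch of both computations (the function names are ours, not the patent's):

```python
import numpy as np

def extract_feature(K_WW, K_Wy):
    # h_y = (K_WW)^+ K_Wy, with (K_WW)^+ the Moore-Penrose
    # generalized inverse of the kernel matrix K_WW
    return np.linalg.pinv(K_WW) @ K_Wy

def class_mean_feature(K_WW, K_WXj):
    # Average feature vector of class j: extract the feature of each of
    # the n_j training columns and average them, which equals
    # (1/n_j) (K_WW)^+ K_WXj 1_{n_j}
    n_j = K_WXj.shape[1]
    return np.linalg.pinv(K_WW) @ K_WXj @ np.ones(n_j) / n_j
```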
In summary, the construction of the additive-Gaussian-kernel-based kernel nonnegative matrix factorization face recognition method proceeds as follows:
1. introduce the additive Gaussian kernel function with stronger robustness, constructed by us, into the algorithm of this patent;
2. derive the update iteration formula of the algorithm using the gradient descent method;
3. prove the convergence of the algorithm by constructing an auxiliary function, which theoretically guarantees its soundness.
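The closed form of the additive Gaussian kernel is not reproduced in this excerpt (the convergence argument above only constrains σ ≥ 8). As an illustration only, the following sketch assumes the common additive construction, an average of per-coordinate Gaussian kernels; both the exact form and the function names are our assumptions, not the patent's definition:

```python
import numpy as np

def additive_gaussian_kernel(x, y, sigma=8.0):
    # Assumed additive form: K(x, y) = (1/d) * sum_i exp(-(x_i - y_i)^2 / (2 sigma^2)),
    # i.e. the mean of d per-coordinate Gaussian kernels (hypothetical sketch).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d = x.shape[0]
    return np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2)).sum() / d

def kernel_matrix(A, B, sigma=8.0):
    # Kernel matrix (k_ij) with k_ij = K(a_i, b_j) for columns a_i of A and
    # b_j of B, matching the roles of K_WX and K_WW in equation (12).
    return np.array([[additive_gaussian_kernel(a, b, sigma)
                      for b in B.T] for a in A.T])
```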
As shown in fig. 2, the present invention provides a kernel nonnegative matrix factorization face recognition method based on an additive Gaussian kernel, which comprises a training step, wherein the training step comprises:
the first step: converting the training sample images into a training sample matrix X, setting an error threshold ε and a maximum iteration number I_max, and inputting the training sample matrix X, the error threshold ε and the maximum iteration number I_max;
the second step: initializing a base image matrix W and a coefficient matrix H;
the third step: setting the iteration number n = 0;
the fourth step: updating the base image matrix W and the coefficient matrix H according to equation (12);
the fifth step: letting n = n + 1;
the sixth step: judging whether the objective function F(W, H) ≤ ε or the iteration number n has reached the maximum iteration number I_max; if yes, outputting the base image matrix W and the coefficient matrix H; otherwise, returning to the fourth step;
in the fourth step, equation (12) is as follows:
Figure BDA0002122248720000181
In equation (12), W denotes the base image matrix, H the coefficient matrix, and w_k the k-th column of W; w_k^(t) and H^(t) denote the t-th iteration values of w_k and H, and w_k^(t+1) and H^(t+1) the (t+1)-th iteration values of w_k and H. The kernel matrix K_WX = (k_ij) has elements k_ij = K(w_i, x_j), where K is the additive Gaussian kernel function and w_i, x_j are the i-th column of the base image matrix W and the j-th column of the training sample matrix X, respectively. The kernel matrix K_WW = (k_ij) has elements k_ij = K(w_i, w_j), where w_i, w_j are the i-th and j-th columns of the base image matrix W. The elements of s are the column sums of the corresponding columns of W, i.e., s = (s_lk) with
Figure BDA0002122248720000186
Figure BDA0002122248720000187
and
Figure BDA0002122248720000188
are defined as follows:
Figure BDA0002122248720000191
Figure BDA0002122248720000192
where x_i is the i-th column of the training sample matrix X, w_k^(t) is the k-th column of W^(t), h_ki^(t) is the element of matrix H^(t) at position (k, i), and (HH^T)_ki^(t) is the element of matrix (HH^T)^(t) at position (k, i).
The kernel nonnegative matrix factorization face recognition method further comprises a recognition step performed after the training step, wherein the recognition step comprises:
a seventh step: calculating the average feature vector m_j (j = 1, …, c) of each class in the training samples, where j is the index of the j-th class and c is the number of different face classes;
an eighth step: inputting the face image y to be recognized and calculating its feature vector h_y = W^+ y, where W^+ is the Moore-Penrose inverse of W and W denotes the base image matrix;
a ninth step: calculating the distance d_j = ||h_y − m_j||_F (j = 1, …, c) between the feature vector h_y of the face image to be recognized and the average feature vector m_j of class j, where ||·||_F is the Frobenius norm; if the distance d_p between h_y and the average feature vector m_p of the class-p samples is the minimum, i.e.
p = arg min_{1≤j≤c} d_j ,
classifying the face image y to be recognized into the p-th class;
a tenth step: outputting the class p, i.e. the face image y to be recognized is recognized as the p-th face class, thereby completing the face recognition.
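Steps seven through ten amount to nearest-class-mean classification in the feature space. A numpy sketch (the function name is ours):

```python
import numpy as np

def recognize(y, W, class_means):
    # eighth step: feature vector h_y = W^+ y, with W^+ the
    # Moore-Penrose inverse of the base image matrix W
    h_y = np.linalg.pinv(W) @ y
    # ninth step: distances d_j = ||h_y - m_j||_F to each class mean
    dists = [np.linalg.norm(h_y - m_j) for m_j in class_means]
    # tenth step: output the class p with the minimum distance
    return int(np.argmin(dists))
```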
The invention also provides a kernel nonnegative matrix factorization face recognition device based on an additive Gaussian kernel, which comprises a training module, wherein the training module comprises:
an input module: for converting the training sample images into a training sample matrix X, setting an error threshold ε and a maximum iteration number I_max, and inputting the training sample matrix X, the error threshold ε and the maximum iteration number I_max;
an initialization module: for initializing a base image matrix W and a coefficient matrix H;
an assignment module: for setting the iteration number n = 0;
an update module: for updating the base image matrix W and the coefficient matrix H according to equation (12);
a counting module: for letting n = n + 1;
a judging module: for judging whether the objective function F(W, H) ≤ ε or the iteration number n has reached the maximum iteration number I_max; if yes, outputting the base image matrix W and the coefficient matrix H; otherwise, returning to the update module;
in the update module, equation (12) is as follows:
Figure BDA0002122248720000201
Figure BDA0002122248720000202
Figure BDA0002122248720000203
In equation (12), W denotes the base image matrix, H the coefficient matrix, and w_k the k-th column of W; w_k^(t) and H^(t) denote the t-th iteration values of w_k and H, and w_k^(t+1) and H^(t+1) the (t+1)-th iteration values of w_k and H. The kernel matrix K_WX = (k_ij) has elements k_ij = K(w_i, x_j), where K is the additive Gaussian kernel function and w_i, x_j are the i-th column of the base image matrix W and the j-th column of the training sample matrix X, respectively. The kernel matrix K_WW = (k_ij) has elements k_ij = K(w_i, w_j), where w_i, w_j are the i-th and j-th columns of the base image matrix W. The elements of s are the column sums of the corresponding columns of W, i.e., s = (s_lk) with
Figure BDA0002122248720000208
Figure BDA0002122248720000209
and
Figure BDA00021222487200002010
are defined as follows:
Figure BDA0002122248720000211
Figure BDA0002122248720000212
where x_i is the i-th column of the training sample matrix X, w_k^(t) is the k-th column of W^(t), h_ki^(t) is the element of matrix H^(t) at position (k, i), and (HH^T)_ki^(t) is the element of matrix (HH^T)^(t) at position (k, i).
The kernel nonnegative matrix factorization face recognition device further comprises a recognition module executed after the training module, wherein the recognition module comprises:
an average feature vector calculation module: for calculating the average feature vector m_j (j = 1, …, c) of each class in the training samples, where j is the index of the j-th class and c is the number of different face classes;
a feature vector calculation module: for inputting the face image y to be recognized and calculating its feature vector h_y = W^+ y, where W^+ is the Moore-Penrose inverse of W and W denotes the base image matrix;
a distance calculation module: for calculating the distance d_j = ||h_y − m_j||_F (j = 1, …, c) between the feature vector h_y of the face image to be recognized and the average feature vector m_j of class j, where ||·||_F is the Frobenius norm; if the distance d_p between h_y and the average feature vector m_p of the class-p samples is the minimum, i.e.
p = arg min_{1≤j≤c} d_j ,
classifying the face image y to be recognized into the p-th class;
an output module: for outputting the class p, thereby completing the face recognition.
The invention also discloses a computer-readable storage medium storing a computer program configured to, when invoked by a processor, implement the steps of the method of the invention.
The invention also discloses a kernel nonnegative matrix factorization face recognition system based on the additive Gaussian kernel, which comprises the following steps: memory, a processor and a computer program stored on the memory, the computer program being configured to carry out the steps of the method of the invention when called by the processor.
Table 1 compares the recognition rates (%) of the proposed additive-Gaussian-kernel-based kernel nonnegative matrix factorization (Our Method) with nonnegative matrix factorization (NMF), kernel principal component analysis (KPCA) and polynomial kernel nonnegative matrix factorization (PNMF) on the Caltech101 face database (TN denotes the number of training samples per class).

TN          4      5      6      7      8      9      10
NMF         70.00  74.01  76.45  78.66  79.74  79.65  84.01
KPCA        59.36  63.77  65.18  66.27  68.16  70.94  72.04
PNMF        65.56  68.58  73.29  75.36  75.26  79.01  78.62
Our Method  73.23  75.83  79.21  80.10  82.00  83.80  85.39

TABLE 1
Table 2 compares the recognition rates (%) of the proposed additive-Gaussian-kernel-based kernel nonnegative matrix factorization (Our Method) with nonnegative matrix factorization (NMF), kernel principal component analysis (KPCA) and polynomial kernel nonnegative matrix factorization (PNMF) on the Caltech101 face database with added salt-and-pepper noise (d denotes the salt-and-pepper noise density).

d           0      0.05   0.1    0.15   0.2    0.25   0.3
NMF         79.42  80.00  78.95  77.66  75.56  72.63  70.88
KPCA        70.41  69.30  71.05  67.02  62.05  59.65  54.09
PNMF        75.26  76.96  76.96  73.10  72.98  71.99  68.01
Our Method  81.93  82.11  82.92  81.75  79.77  79.24  77.36

TABLE 2
The invention has the beneficial effects that:
1. and obtaining a kernel nonnegative matrix factorization algorithm with noise resistance through the constructed additive Gaussian kernel function with noise resistance. Experimental results show that the algorithm has robustness to noise.
2. The convergence of the algorithm provided by the invention is not only theoretically proved by using the auxiliary function, but also verified in experiments, and the algorithm has higher convergence speed.
3. The results of experiments and comparisons of the algorithm and related algorithms in the public face database show that the algorithm developed by the patent has certain superiority.
4. The results of experimental comparison of the face database added with noise and a related algorithm show that the algorithm developed by the invention has good robustness.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (4)

1. A kernel nonnegative matrix factorization face recognition method based on an additive Gaussian kernel, characterized by comprising a training step, wherein the training step comprises the following steps:
a first step: converting the training sample images into a training sample matrix X, setting an error threshold ε and a maximum iteration number I_max, and inputting the training sample matrix X, the error threshold ε and the maximum iteration number I_max;
a second step: initializing a base image matrix W and a coefficient matrix H;
a third step: setting the iteration number N = 0;
a fourth step: updating the base image matrix W and the coefficient matrix H according to equation (12);
a fifth step: letting N = N + 1;
a sixth step: judging whether the objective function F(W, H) ≤ ε or the iteration number N has reached the maximum iteration number I_max; if yes, outputting the base image matrix W and the coefficient matrix H; otherwise, returning to the fourth step; in the fourth step, equation (12) is as follows:
Figure FDA0003856131150000011
Figure FDA0003856131150000012
Figure FDA0003856131150000013
In equation (12), W denotes the base image matrix, H the coefficient matrix, and w_k the k-th column of W; w_k^(t) and H^(t) denote the t-th iteration values of w_k and H, and w_k^(t+1) and H^(t+1) the (t+1)-th iteration values of w_k and H. The kernel matrix K_WX = (k_ij) has elements k_ij = K(w_i, x_j), where K is the additive Gaussian kernel function and w_i, x_j are the i-th column of the base image matrix W and the j-th column of the training sample matrix X, respectively. The kernel matrix K_WW = (k_ij) has elements k_ij = K(w_i, w_j), where w_i, w_j are the i-th and j-th columns of the base image matrix W. The elements of s are the column sums of the corresponding columns of W, i.e., s = (s_lk) with
Figure FDA0003856131150000021
Figure FDA0003856131150000022
and
Figure FDA0003856131150000023
are defined as follows:
Figure FDA0003856131150000024
Figure FDA0003856131150000025
where x_i is the i-th column of the training sample matrix X, w_k^(t) is the k-th column of W^(t), h_ki^(t) is the element of matrix H^(t) at position (k, i), and (HH^T)_ki^(t) is the element of matrix (HH^T)^(t) at position (k, i);
the kernel nonnegative matrix factorization face recognition method further comprises a recognition step performed after the training step, wherein the recognition step comprises:
a seventh step: calculating the average feature vector m_j (j = 1, …, c) of each class in the training samples, where c is the number of different face classes and j is the index of the j-th class; suppose there are c classes of samples in the original space, where the number of training samples of the j-th class is n_j (j = 1, 2, …, c) and the training sample matrix of that class is X_j; then the average feature vector of class j is expressed as:
m_j = (1/n_j) (K_WW)^+ K_WX_j 1_{n_j} ,
where 1_{n_j} is an n_j × 1 all-ones column vector;
an eighth step: inputting the face image y to be recognized and calculating its feature vector h_y = W^+ y, where W^+ is the Moore-Penrose inverse of W and W denotes the base image matrix;
a ninth step: calculating the distance d_j = ||h_y − m_j||_F (j = 1, …, c) between the feature vector h_y of the face image to be recognized and the average feature vector m_j of class j, where ||·||_F is the Frobenius norm; if the distance d_p between h_y and the average feature vector m_p of the class-p samples is the minimum, i.e.
p = arg min_{1≤j≤c} d_j ,
classifying the face image y to be recognized into the p-th class;
a tenth step: outputting the class p, thereby completing the face recognition.
2. A kernel nonnegative matrix factorization face recognition device based on an additive Gaussian kernel, characterized by comprising a training module, wherein the training module comprises:
an input module: for converting the training sample images into a training sample matrix X, setting an error threshold ε and a maximum iteration number I_max, and inputting the training sample matrix X, the error threshold ε and the maximum iteration number I_max;
an initialization module: for initializing a base image matrix W and a coefficient matrix H;
an assignment module: for setting the iteration number N = 0;
an update module: for updating the base image matrix W and the coefficient matrix H according to equation (12);
a counting module: for letting N = N + 1;
a judging module: for judging whether the objective function F(W, H) ≤ ε or the iteration number N has reached the maximum iteration number I_max; if yes, outputting the base image matrix W and the coefficient matrix H; otherwise, returning to the update module; in the update module, equation (12) is as follows:
Figure FDA0003856131150000032
Figure FDA0003856131150000033
Figure FDA0003856131150000034
In equation (12), W denotes the base image matrix, H the coefficient matrix, and w_k the k-th column of W; w_k^(t) and H^(t) denote the t-th iteration values of w_k and H, and w_k^(t+1) and H^(t+1) the (t+1)-th iteration values of w_k and H. The kernel matrix K_WX = (k_ij) has elements k_ij = K(w_i, x_j), where K is the additive Gaussian kernel function and w_i, x_j are the i-th column of the base image matrix W and the j-th column of the training sample matrix X, respectively. The kernel matrix K_WW = (k_ij) has elements k_ij = K(w_i, w_j), where w_i, w_j are the i-th and j-th columns of the base image matrix W. The elements of s are the column sums of the corresponding columns of W, i.e., s = (s_lk) with
Figure FDA0003856131150000042
Figure FDA0003856131150000043
and
Figure FDA0003856131150000044
are defined as follows:
Figure FDA0003856131150000045
Figure FDA0003856131150000046
where x_i is the i-th column of the training sample matrix X, w_k^(t) is the k-th column of W^(t), h_ki^(t) is the element of matrix H^(t) at position (k, i), and (HH^T)_ki^(t) is the element of matrix (HH^T)^(t) at position (k, i);
the device for recognizing the human face by the kernel nonnegative matrix factorization also comprises a recognition module which is executed after a training module, wherein the recognition module comprises:
an average feature vector calculation module: for calculating an average feature vector m for each class in a training sample j (j =1, \ 8230;, c), c is the number of different face categories, and j is the number of marks of the jth category; suppose there are c samples in the original space, where the training sample number of the j class is n j (j =1,2, \8230;, c), the training sample matrix is X j Then the average feature vector of class j is represented as:
Figure FDA0003856131150000049
wherein the content of the first and second substances,
Figure FDA00038561311500000410
is one dimension n j A full column vector of x 1 dimensions;
the characteristic vector calculation module is used for inputting the face image y to be recognized and calculating the characteristic vector h thereof y =W + y, wherein W + Moore-Penrose inverse of W, W representing the base image matrix;
distance calculating module for calculating characteristic vector h of face image to be recognized y Mean feature vector m to class j j Distance d of j =||h y -m j || F ,j=1,…,c,||·|| F Is the Frobenius norm if h y Mean feature vector m with class p samples p Distance d of p At a minimum, i.e.
Figure FDA0003856131150000051
Classifying the face image y to be recognized into the pth class;
an output module: for outputting the class P, thereby completing face recognition.
3. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program configured to, when invoked by a processor, implement the steps of the method of claim 1.
4. A kernel nonnegative matrix factorization face recognition system based on an additive Gaussian kernel is characterized by comprising: memory, a processor and a computer program stored on the memory, the computer program being configured to carry out the steps of the method of claim 1 when invoked by the processor.
CN201910610652.XA 2019-07-08 2019-07-08 Additive Gaussian kernel based kernel nonnegative matrix factorization face recognition method, device and system and storage medium Active CN110378262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910610652.XA CN110378262B (en) 2019-07-08 2019-07-08 Additive Gaussian kernel based kernel nonnegative matrix factorization face recognition method, device and system and storage medium


Publications (2)

Publication Number Publication Date
CN110378262A CN110378262A (en) 2019-10-25
CN110378262B true CN110378262B (en) 2022-12-13

Family

ID=68252338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910610652.XA Active CN110378262B (en) 2019-07-08 2019-07-08 Additive Gaussian kernel based kernel nonnegative matrix factorization face recognition method, device and system and storage medium

Country Status (1)

Country Link
CN (1) CN110378262B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126169B (en) * 2019-12-03 2022-08-30 重庆邮电大学 Face recognition method and system based on orthogonalization graph regular nonnegative matrix factorization

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095835A (en) * 2014-05-12 2015-11-25 比亚迪股份有限公司 Pedestrian detection method and system
CN106169073A (en) * 2016-07-11 2016-11-30 北京科技大学 A kind of expression recognition method and system
CN107330364A (en) * 2017-05-27 2017-11-07 上海交通大学 A kind of people counting method and system based on cGAN networks
CN107480636A (en) * 2017-08-15 2017-12-15 深圳大学 Face identification method, system and storage medium based on core Non-negative Matrix Factorization
CN107784293A (en) * 2017-11-13 2018-03-09 中国矿业大学(北京) A kind of Human bodys' response method classified based on global characteristics and rarefaction representation
WO2018126638A1 (en) * 2017-01-03 2018-07-12 京东方科技集团股份有限公司 Method and device for detecting feature point in image, and computer-readable storage medium
CN109508697A (en) * 2018-12-14 2019-03-22 深圳大学 Face identification method, system and the storage medium of half Non-negative Matrix Factorization based on E auxiliary function

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7353215B2 (en) * 2001-05-07 2008-04-01 Health Discovery Corporation Kernels and methods for selecting kernels for use in learning machines
US10235600B2 (en) * 2015-06-22 2019-03-19 The Johns Hopkins University System and method for structured low-rank matrix factorization: optimality, algorithm, and applications to image processing


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Novel Enhanced Nonnegative Feature Extraction Approach;Haitao Chen,et al.;《2018 14th International Conference on Computational Intelligence and Security》;20181231;全文 *
Human Microbe-Disease Association Prediction With Graph Regularized Non-Negative Matrix Factorization;He BinSheng;《FRONTIERS IN MICROBIOLOGY》;20181101;全文 *
KERNEL NONNEGATIVE MATRIX FACTORIZATION WITH RBF KERNEL FUNCTION FOR FACE RECOGNITION;WEN-SHENG CHEN,et al.;《Proceedings of the 2017 International Conference on Machine Learning and Cybernetics》;20171116;全文 *
基于显著性检测与HOG-NMF特征的快速行人检测方法;孙锐等;《电子与信息学报》;20130815(第08期);全文 *
基于非负矩阵分解的人脸识别算法研究;李育高;《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》;20170715;全文 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant