WO2020010602A1 - Method, system and storage medium for constructing nonlinear non-negative matrix factorization face recognition - Google Patents

Method, system and storage medium for constructing nonlinear non-negative matrix factorization face recognition (一种非线性非负矩阵分解人脸识别构建方法、系统及存储介质) - Download PDF

Info

Publication number
WO2020010602A1
WO2020010602A1 (PCT/CN2018/095554, CN2018095554W)
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
negative
function
kernel
face recognition
Prior art date
Application number
PCT/CN2018/095554
Other languages
English (en)
French (fr)
Inventor
陈文胜 (CHEN Wensheng)
刘敬敏 (LIU Jingmin)
Original Assignee
深圳大学 (Shenzhen University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Priority to PCT/CN2018/095554 priority Critical patent/WO2020010602A1/zh
Publication of WO2020010602A1 publication Critical patent/WO2020010602A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the present invention relates to the technical field of data processing, and in particular, to a method, system, and storage medium for constructing face recognition based on non-linear non-negative matrix factorization.
  • biometric technology that uses the inherent physiological and behavioral characteristics of the human body for personal identification has become one of the most active research fields.
  • face recognition is non-invasive, non-mandatory, non-contact, and can operate concurrently.
  • Face recognition technology includes two stages.
  • the first stage is feature extraction, that is, the extraction of facial feature information in the face image. This stage directly determines the quality of face recognition technology.
  • the second stage is identification. Personal identification is performed based on the extracted feature information.
  • Principal component analysis (PCA) and singular value decomposition (SVD) are classic feature extraction methods, but the feature vectors they extract usually contain negative elements, so when the original samples are non-negative data these methods lack rationality and interpretability.
  • Non-negative matrix factorization is a feature extraction method for processing non-negative data. It is widely used in applications such as hyperspectral data processing and face image recognition.
  • the NMF algorithm has non-negative restrictions on the extracted features during the non-negative data matrix decomposition of the original sample, that is, all components after the decomposition are non-negative, so non-negative sparse features can be extracted.
  • the essence of the NMF algorithm is to approximately decompose the non-negative matrix X into the product of the base image matrix W and the coefficient matrix H, that is, X ⁇ WH, and W and H are both non-negative matrices.
  • each column of the matrix X can be represented as a non-negative linear combination of the column vectors of the matrix W, which also conforms to the construction basis of the NMF algorithm: the perception of the whole is composed of the perception of the parts that make up the whole (purely additive).
  • LNMF: local NMF algorithm
  • DNMF: discriminative NMF algorithm
  • SNMF: symmetric NMF algorithm
  • the kernel method is an effective way to handle nonlinear problems; it provides an elegant theoretical framework for extending linear algorithms to nonlinear ones.
  • the basic idea of the kernel method is to use a non-linear mapping function to map the original data into a high-dimensional feature space, so that the mapped data is linearly separable, and then apply a linear algorithm to the mapped data.
  • in the kernel method, the most critical part is the use of the kernel trick: by using the kernel function to replace the inner product of the mapped data, the specific analytical form of the non-linear mapping function does not need to be known.
  • the use of the kernel trick reduces the difficulty of extending the mapping to the function space, i.e., the reproducing kernel Hilbert space (RKHS).
  • NLNMF: nonlinear NMF algorithm
  • NLNMF algorithms include polynomial kernel non-negative matrix factorization (PNMF) and Gaussian kernel non-negative matrix factorization (RBFNMF), and their loss functions are constructed based on the square of the F-norm.
  • PNMF polynomial kernel non-negative matrix factorization
  • RBFNMF Gaussian kernel non-negative matrix factorization
  • Let {x 1 , x 2 , ..., x n } be a set of data in the original sample space.
  • the main idea of the kernel method is to map the samples from the original space to a higher-dimensional feature space through a non-linear mapping function φ(·), so that the samples are linearly separable in this feature space; as long as the original space is finite-dimensional, such a high-dimensional feature space must exist.
  • non-linear mapping function
  • the inner product of x i and x j in the feature space can be computed from the kernel function k(·,·) evaluated in the original sample space, which not only solves these problems but also simplifies the computation.
  • the main purpose of NLNMF is to use kernel methods to apply NMF to nonlinear problems.
  • the NMF algorithm is used to process the mapped data in the high-dimensional feature space, and φ(X) is approximately decomposed into the product of two matrices φ(W) and H, that is, φ(X) ≈ φ(W)H.
  • the kernel function k(·,·) implicitly defines a high-dimensional feature space; if the kernel function is not selected properly, the sample data are mapped to an unsuitable feature space, which is likely to cause poor performance.
  • Another major factor is the construction of the loss function F (W, H).
  • the loss function determines the accuracy of the NLNMF algorithm to a certain extent; different loss functions have different emphases, and not all loss functions can be used in the feature space, so the selection of the loss function is also important.
  • the commonly used loss function is F_F(W, H), constructed from the F-norm, that is,
  • Nonlinear non-negative matrix factorization algorithm PNMF based on polynomial kernel
  • the loss function of the polynomial kernel non-negative matrix factorization algorithm is F F (W, H). It solves the optimization problem (1) based on the polynomial kernel function.
  • the updated iterative formulas for W and H are:
  • is a diagonal matrix whose diagonal elements are
  • the loss functions of current non-linear non-negative matrix factorization algorithms are all based on the F-norm.
  • the F-norm has two major shortcomings, namely sensitivity to outliers and poor sparsity; this gives the algorithm poor stability, so it cannot resist changes in pose and illumination in face recognition well;
  • the power exponent of the polynomial kernel function in polynomial kernel non-negative matrix factorization (PNMF) can only be an integer: when the power exponent is a fraction there is no guarantee that it is still a kernel function, and restricting the power exponent to integers weakens the discriminative ability of the kernel function.
  • the invention provides a method for constructing face recognition based on nonlinear non-negative matrix factorization, which includes the following steps:
  • Loss characterization step: use the l_{2,p}-norm of the matrix to characterize the degree of loss after matrix factorization;
  • Sparsity enhancement step: use the l_1-norm of the matrix to enhance the sparse representation of features, and add a regularization term on the matrix H to the loss function;
  • Step of obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization: use the fractional power inner product kernel function to form the optimization problem to be solved, solve H with the gradient descent method and W with the exponentiated gradient descent method, so that the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization are obtained.
  • the construction method further includes a convergence verification step.
  • the convergence verification step the convergence of the algorithm is proved by constructing an auxiliary function.
  • the updated iterative formula of the fractional power inner product kernel non-negative matrix factorization is:
  • the nonlinear non-negative matrix factorization face recognition method provided by the present invention further includes a training step.
  • the training step includes:
  • the first step transform the training sample image into a training sample matrix X, set an error threshold ⁇ , a maximum number of iterations I max , and initialize the base image matrix W and the coefficient matrix H;
  • the second step update the iteration formula of the non-negative matrix factorization of the inner product kernel of fractional powers to update W and H;
  • the non-linear non-negative matrix factorization face recognition method further includes performing a testing step after the training step, the testing step includes:
  • Step six: if l = arg min_j ||h_y - m_j||, then the sample y is classified into class l.
  • the updated iteration formula of the non-negative matrix factorization of the fractional power inner product kernel in the training step is:
  • the present invention also provides a non-linear non-negative matrix factorization face recognition system, which includes a memory, a processor, and a computer program stored on the memory.
  • the computer program is configured to implement the steps of the described method when called by the processor.
  • the invention also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program configured to implement the steps of the method when called by a processor.
  • the beneficial effects of the present invention are: 1. By using the l_{2,p}-norm instead of the F-norm to measure the loss of matrix factorization, the problem of sensitivity to outliers in existing kernel non-negative matrix factorization algorithms is solved, and the stability of the kernel non-negative matrix factorization algorithm proposed by the present invention is enhanced; 2. The sparseness of the feature representation of the algorithm of the present invention is enhanced by adding a regularization restriction on the coefficient matrix H to the objective function; 3.
  • the kernel non-negative matrix factorization with the new objective function is integrated with the fractional power inner product kernel function to obtain a sparse fractional power inner product nonlinear non-negative matrix factorization algorithm with efficient recognition performance, which solves the problem that the hyperparameter of the polynomial kernel function can only be an integer and improves the discriminative ability of the algorithm.
  • FIG. 1 is a flowchart of an algorithm construction process of the present invention
  • FIG. 3 is a comparison diagram of the recognition rate of the proposed algorithm and related algorithms (KLPP, LNMF, PNMF) on the CMU PIE face database;
  • FIG. 4 is a convergence curve diagram of the algorithm of the present invention.
  • the present invention discloses a method for constructing nonlinear non-negative matrix factorization face recognition, and specifically discloses a method for constructing nonlinear non-negative matrix factorization face recognition based on the l_{2,p}-norm.
  • the method for constructing non-linear non-negative matrix factorization face recognition includes the following steps:
  • Loss characterization step: use the l_{2,p}-norm of the matrix to characterize the degree of loss after matrix factorization;
  • Sparsity enhancement step: use the l_1-norm of the matrix to enhance the sparse representation of features, and add a regularization term on the matrix H to the loss function;
  • Step of obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization: use the fractional power inner product kernel function to form the optimization problem to be solved, solve H with the gradient descent method and W with the exponentiated gradient descent method, and thus obtain the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization.
  • the construction method also includes a convergence verification step.
  • the convergence verification step the convergence of the algorithm is proved by constructing an auxiliary function.
  • the present invention discloses a non-linear non-negative matrix factorization face recognition method, which includes a training step.
  • the training step includes:
  • the first step transform the training sample image into a training sample matrix X, set an error threshold ⁇ , a maximum number of iterations I max , and initialize the base image matrix W and the coefficient matrix H;
  • the second step update the iteration formula of the non-negative matrix factorization of the inner product kernel of fractional powers to update W and H;
  • the nonlinear non-negative matrix factorization face recognition method further includes performing a testing step after the training step.
  • the testing step includes:
  • Step six: if l = arg min_j ||h_y - m_j||, then the sample y is classified into class l.
  • the updated iterative formula of the non-negative matrix factorization of the fractional power inner product kernel in the training step of the non-linear non-negative matrix factorization face recognition method is:
  • the invention also discloses a non-linear non-negative matrix factorization face recognition system, which includes: a memory, a processor, and a computer program stored on the memory.
  • the computer program is configured to implement the steps of the described method when called by the processor.
  • the invention also discloses a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program configured to implement the steps of the method when called by a processor.
  • Keyword interpretation (note: roughly explain the concepts of some key words related to the present invention)
  • the l_{2,p}-norm contains the F-norm (the two coincide when p = 2).
  • NMF Non-negative Matrix Factorization
  • the base image matrix and the coefficient matrix, respectively.
  • the loss function is defined based on the F-norm, as:
  • let χ be the input space
  • let k(·,·) be a symmetric function defined on χ × χ
  • the kernel matrix K is always positive semidefinite:
  • the present invention introduces a fractional power inner product kernel function. Assume x is an m-dimensional column vector with non-negative entries; in the present invention, x^d is defined as the element-wise d-th power of x, with d > 0.
  • Theorem 1: For arbitrary vectors x, y in the non-negative orthant and a positive real number d, the function k is defined as:
  • k(x, y) = (x^d)^T (y^d); then k is a kernel function. We call this function the fractional power inner product kernel function.
  • the present invention uses the l_{2,p}-norm (0 < p ≤ 2) to characterize the loss function, that is:
  • λ ≥ 0 is the regularization parameter
  • problem (2) also evolved into two sub-problems, namely:
  • the gradient descent method is used to solve the coefficient matrix H, which includes:
  • the step-size vector is selected as:
  • This updated iterative formula can be transformed into a matrix form with the following theorem.
  • Theorem 2 Fixed matrix W.
  • the objective function f 1 (H) is non-increasing.
  • the coefficient matrix H in subproblem (3) is updated in the following iterative manner:
  • Using the exponentiated gradient descent method, a nonlinear generalization of gradient descent, we have:
  • the step size is selected as:
  • Extending it to the matrix form can get the updated iterative formula of W, such as the following theorem.
  • Theorem 3: with the matrix H fixed, the objective function f_2(W) is non-increasing.
  • the base image matrix W in the subproblem (4) is updated in the following iterative manner:
  • Definition 1 For any matrices H and H (t) , if the conditions are met
  • G (H, H (t) ) is called an auxiliary function of function f (H).
  • Theorem 4 Is a diagonal matrix with diagonal elements as Then
  • G 1 (H, H (t) ) is the auxiliary function of f 1 (H), which is proved.
  • Theorem 5 Let Is a symmetric matrix with elements
  • maps it into the feature space
  • ⁇ (y) can be expressed as a linear combination of the column vectors of the mapped base image matrix ⁇ (W) as
  • the beneficial effects of the present invention are: 1. By using the l_{2,p}-norm instead of the F-norm to measure the loss of matrix factorization, the problem of sensitivity to outliers in existing kernel non-negative matrix factorization algorithms is solved, and the stability of the kernel non-negative matrix factorization algorithm proposed by the present invention is enhanced; 2. The sparseness of the feature representation of the algorithm of the present invention is enhanced by adding a regularization restriction on the coefficient matrix H to the objective function; 3.
  • the kernel non-negative matrix factorization with the new objective function is integrated with the fractional power inner product kernel function to obtain a sparse fractional power inner product nonlinear non-negative matrix factorization algorithm with efficient recognition performance, which solves the problem that the hyperparameter of the polynomial kernel function can only be an integer and improves the discriminative ability of the algorithm.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Complex Calculations (AREA)

Abstract

A method, system and storage medium for constructing nonlinear non-negative matrix factorization face recognition. The method comprises: using the l_{2,p}-norm to characterize the degree of loss after matrix factorization; using the l_1-norm of the matrix to enhance the sparse representation of features, and adding a regularization term on the matrix H to the loss function; forming the objective function F(W, H) from the loss characterization step and the sparsity enhancement step; and obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization. The method can solve the problem of sensitivity to outliers in kernel non-negative matrix factorization algorithms, enhances the sparsity of the algorithm's feature representation, and also solves the problem that the hyperparameter of the polynomial kernel function can only be an integer.

Description

Method, system and storage medium for constructing nonlinear non-negative matrix factorization face recognition
Technical Field
The present invention relates to the technical field of data processing, and in particular to a method, system and storage medium for constructing nonlinear non-negative matrix factorization face recognition.
Background Art
With the arrival of the information age, biometric technology, which uses the inherent physiological and behavioral characteristics of the human body for personal identification, has become one of the most active research fields. Among the many branches of biometric technology, the technology most easily accepted by people is face recognition, because, compared with other biometric technologies, face recognition is non-invasive, non-mandatory, non-contact and can operate concurrently.
Face recognition technology includes two stages. The first stage is feature extraction, that is, extracting the facial feature information in the face image; this stage directly determines the quality of a face recognition technique. The second stage is identification, in which personal identity is determined from the extracted feature information. Principal component analysis (PCA) and singular value decomposition (SVD) are classic feature extraction methods, but the feature vectors they extract usually contain negative elements, so when the original samples are non-negative data these methods lack rationality and interpretability. Non-negative matrix factorization (NMF) is a feature extraction method for processing non-negative data; it is very widely applied, for example in hyperspectral data processing and face image recognition. During the factorization of the non-negative data matrix of the original samples, the NMF algorithm imposes a non-negativity restriction on the extracted features, that is, all components after factorization are non-negative, so non-negative sparse features can be extracted. The essence of the NMF algorithm is to approximately factorize the non-negative matrix X into the product of a base image matrix W and a coefficient matrix H, i.e., X ≈ WH, where both W and H are non-negative matrices. In this way, each column of the matrix X can be represented as a non-negative linear combination of the column vectors of the matrix W, which also conforms to the construction basis of the NMF algorithm: the perception of the whole is composed of the perception of the parts that make up the whole (purely additive). In recent years, scholars have proposed many variant NMF algorithms, for example the local NMF algorithm (LNMF) that strengthens locality constraints, the discriminative NMF algorithm (DNMF) that integrates discriminative information, and the symmetric NMF algorithm (SNMF) proposed for symmetric matrices. These NMF algorithms are all linear methods. However, in face recognition, because of changes in pose, illumination, occlusion, age and other factors, the distribution of face images is often complex and nonlinear, and linear methods are no longer applicable. Therefore, a nonlinear model based on NMF needs to be proposed, and this problem is also very challenging.
To handle nonlinear problems, the kernel method is an effective approach; it provides an elegant theoretical framework for extending linear algorithms to nonlinear ones. The basic idea of the kernel method is to map the original data into a high-dimensional feature space through a nonlinear mapping function, so that the mapped data become linearly separable, and then to apply a linear algorithm to the mapped data. The most critical part of the kernel method is the use of the kernel trick: the kernel function replaces the inner product of the mapped data, so the explicit analytical form of the nonlinear mapping function does not need to be known. The kernel trick reduces the difficulty of extending the mapping to the function space, i.e., the reproducing kernel Hilbert space (RKHS). The polynomial kernel and the Gaussian kernel are two commonly used kernel functions. Using the kernel method, the linear NMF algorithm is generalized to the nonlinear NMF algorithm (NLNMF). The main idea of NLNMF is therefore to map the matrix X into a high-dimensional feature space through a nonlinear mapping function φ and, in this feature space, to use the NMF algorithm to approximately factorize the matrix φ(X) into the product of two matrices φ(W) and H, i.e., φ(X) ≈ φ(W)H, where W and H are non-negative matrices.
The existing NLNMF algorithms include the polynomial kernel non-negative matrix factorization algorithm (PNMF) and the Gaussian kernel non-negative matrix factorization algorithm (RBFNMF), whose loss functions are both constructed from the square of the F-norm. In face recognition, because of the influence of illumination, occlusions and so on, face image data often contain noise and outliers; however, the F-norm is rather sensitive to outliers, so the stability of the PNMF and RBFNMF algorithms is poor.
The three methods below are explained in detail:
1. The kernel method
Let {x_1, x_2, ..., x_n} be a set of data in the original sample space. The main idea of the kernel method is to map the samples from the original space into a higher-dimensional feature space through a nonlinear mapping function φ(·), so that the samples are linearly separable in that feature space; as long as the original space is finite-dimensional, such a high-dimensional feature space must exist. In this feature space, linear methods can then be used to process the sample data. However, the dimension of the feature space may be very high, or even infinite, and the explicit form of the nonlinear mapping is also hard to determine. To avoid these obstacles, the kernel function can be used cleverly:
k(x_i, x_j) = <φ(x_i), φ(x_j)> = φ(x_i)^T φ(x_j),
that is, the inner product of x_i and x_j in the feature space can be computed from the kernel function k(·,·) evaluated in the original sample space. This not only resolves these problems but also simplifies the computation.
Commonly used kernel functions include the polynomial kernel k(x_i, x_j) = (x_i^T x_j)^d, d > 0, and the Gaussian (RBF) kernel k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2δ^2)).
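As a small illustration (my own toy example, not part of the patent text), the two common kernels just mentioned can be evaluated directly in the original sample space with NumPy, without ever forming the mapping φ explicitly:

```python
import numpy as np

def polynomial_kernel(x, y, d=2):
    """k(x, y) = (x^T y)^d with d > 0."""
    return float(x @ y) ** d

def rbf_kernel(x, y, delta=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 * delta^2))."""
    diff = x - y
    return float(np.exp(-(diff @ diff) / (2.0 * delta ** 2)))

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.4, 0.1, 0.5])
print(polynomial_kernel(x, y, d=2), rbf_kernel(x, y, delta=1.0))
```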
2. The nonlinear non-negative matrix factorization algorithm (NLNMF)
The main purpose of NLNMF is to use the kernel method to apply NMF to nonlinear problems. First, the sample data in the original space, X = [x_1, x_2, ..., x_n], are mapped through the mapping function φ(·) into a high-dimensional feature space, giving the mapped sample data φ(X) = [φ(x_1), φ(x_2), ..., φ(x_n)], so that the sample data become linearly separable. Then the NMF algorithm is used in the high-dimensional feature space to process the mapped data, approximately factorizing φ(X) into the product of two matrices φ(W) and H, i.e.,
φ(X) ≈ φ(W)H,
where W is the base image matrix and H is the coefficient matrix, both non-negative. To measure the loss in the matrix factorization process, a loss function F(W, H) needs to be constructed; the smaller its value, the more reasonable the factorized matrices. Therefore, the optimization problem that NLNMF needs to solve is:
min F(W, H)  s.t. W ≥ 0, H ≥ 0.  (1)
In the NLNMF algorithm, two main factors affect its performance. The most important factor is the choice of the kernel function k(·,·): the kernel function implicitly defines the high-dimensional feature space, and if the kernel function is chosen improperly, the sample data are mapped into an unsuitable feature space, which is likely to lead to poor performance. The other main factor is the construction of the loss function F(W, H). The loss function determines the accuracy of the NLNMF algorithm to a certain extent; different loss functions have different emphases, and not all loss functions can be used in the feature space, so the choice of the loss function is also crucial. The commonly used loss function is F_F(W, H), constructed from the F-norm, that is,
F_F(W, H) = ||φ(X) - φ(W)H||_F^2.
3. The nonlinear non-negative matrix factorization algorithm based on the polynomial kernel (PNMF)
The loss function of the polynomial kernel non-negative matrix factorization algorithm (PNMF) is F_F(W, H). It solves optimization problem (1) based on the polynomial kernel function, and the resulting update iteration formulas for W and H are:
Figure PCTCN2018095554-appb-000007
Figure PCTCN2018095554-appb-000008
Figure PCTCN2018095554-appb-000009
where
Figure PCTCN2018095554-appb-000010
and Ω is a diagonal matrix whose diagonal elements are
Figure PCTCN2018095554-appb-000011
From the above it can be seen that: 1. the loss functions of current nonlinear non-negative matrix factorization algorithms are all constructed from the F-norm; however, the F-norm has two major shortcomings, namely sensitivity to outliers and poor sparsity, which makes the algorithms unstable and unable to resist the changes of pose and illumination in face recognition well; 2. the power exponent of the polynomial kernel function in polynomial kernel non-negative matrix factorization (PNMF) can only be an integer: when the exponent is a fraction there is no guarantee that it is still a kernel function, and restricting the exponent to integers weakens the discriminative ability of the kernel function.
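The first point above, the sensitivity of the squared F-norm to outliers, can be seen in a small numerical illustration of my own (not part of the patent): with column-wise residuals, a single badly reconstructed column contributes quadratically to the squared-F loss but only to the p-th power (here p = 1) under an l_{2,p}-type loss, so it dominates the former far more strongly.

```python
import numpy as np

# Residual matrix with four well-fitted columns and one outlier column.
E = np.zeros((10, 5))
E[:, :4] = 0.1
E[:, 4] = 5.0

col_norms = np.linalg.norm(E, axis=0)   # ||e_j||_2 for each column
loss_F = np.sum(col_norms ** 2)         # squared F-norm loss
loss_2p = np.sum(col_norms ** 1)        # l_{2,p}-type loss with p = 1

# Share of the total loss contributed by the outlier column.
print(col_norms[4] ** 2 / loss_F)       # about 0.998
print(col_norms[4] / loss_2p)           # about 0.93
```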
Summary of the Invention
The present invention provides a method for constructing nonlinear non-negative matrix factorization face recognition, comprising the following steps:
a loss characterization step: using the l_{2,p}-norm of the matrix to characterize the degree of loss after matrix factorization;
a sparsity enhancement step: using the l_1-norm of the matrix to enhance the sparse representation of features, and adding a regularization term on the matrix H to the loss function;
an objective function construction step: forming the objective function F(W, H) from the loss characterization step and the sparsity enhancement step;
a step of obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization: using the fractional power inner product kernel function to form the optimization problem to be solved, solving H with the gradient descent method and W with the exponentiated gradient descent method, and thus obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization.
As a further improvement of the present invention, the construction method further includes a convergence verification step, in which the convergence of the algorithm is proved by constructing an auxiliary function.
As a further improvement of the present invention, the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization are:
Figure PCTCN2018095554-appb-000012
Figure PCTCN2018095554-appb-000013
Figure PCTCN2018095554-appb-000014
The nonlinear non-negative matrix factorization face recognition method provided by the present invention further includes a training step, the training step comprising:
a first step: converting the training sample images into a training sample matrix X, setting an error threshold ε and a maximum number of iterations I_max, and initializing the base image matrix W and the coefficient matrix H;
a second step: updating W and H using the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization;
a third step: if the objective function F(W, H) ≤ ε or the number of iterations reaches I_max, stopping the iteration and outputting the matrices W and H; otherwise, executing the second step.
As a further improvement of the present invention, the nonlinear non-negative matrix factorization face recognition method further includes performing a testing step after the training step, the testing step comprising:
a fourth step: computing the mean feature vector m_j (j = 1, ..., c) of each class in the training samples;
a fifth step: for a test sample y, computing its feature vector h_y;
a sixth step: if
l = arg min_j ||h_y - m_j||,
then assigning the sample y to class l.
As a further improvement of the present invention, the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization in the training step are:
Figure PCTCN2018095554-appb-000016
Figure PCTCN2018095554-appb-000017
Figure PCTCN2018095554-appb-000018
The present invention further provides a nonlinear non-negative matrix factorization face recognition system, comprising: a memory, a processor, and a computer program stored on the memory, the computer program being configured to implement the steps of the described method when called by the processor.
The present invention further provides a computer-readable storage medium storing a computer program configured to implement the steps of the described method when called by a processor.
The beneficial effects of the present invention are: 1. by using the l_{2,p}-norm instead of the F-norm to measure the degree of loss of the matrix factorization, the problem of sensitivity to outliers in existing kernel non-negative matrix factorization algorithms is solved and the stability of the kernel non-negative matrix factorization algorithm proposed by the present invention is enhanced; 2. by adding a regularization restriction on the coefficient matrix H to the objective function, the sparsity of the feature representation of the algorithm of the present invention is enhanced; 3. by integrating the proposed kernel non-negative matrix factorization, which has the new objective function, with the fractional power inner product kernel function, a sparse fractional power inner product nonlinear non-negative matrix factorization algorithm with efficient recognition performance is obtained, which solves the problem that the hyperparameter of the polynomial kernel function can only be an integer and improves the discriminative ability of the algorithm.
Brief Description of the Drawings
FIG. 1 is a flowchart of the algorithm construction process of the present invention;
FIG. 2 is a flowchart of the algorithm execution process of the present invention;
FIG. 3 is a comparison of the recognition rates of the algorithm proposed by the present invention and related algorithms (KLPP, LNMF, PNMF) on the CMU PIE face database;
FIG. 4 is a convergence curve of the algorithm of the present invention.
Detailed Description of the Embodiments
As shown in FIG. 1, the present invention discloses a method for constructing nonlinear non-negative matrix factorization face recognition, and specifically discloses a method for constructing nonlinear non-negative matrix factorization face recognition based on the l_{2,p}-norm.
The construction method of nonlinear non-negative matrix factorization face recognition includes the following steps:
a loss characterization step: using the l_{2,p}-norm of the matrix to characterize the degree of loss after matrix factorization;
a sparsity enhancement step: using the l_1-norm of the matrix to enhance the sparse representation of features, and adding a regularization term on the matrix H to the loss function;
an objective function construction step: forming the objective function F(W, H) from the loss characterization step and the sparsity enhancement step;
a step of obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization: using the fractional power inner product kernel function to form the optimization problem to be solved, solving H with the gradient descent method and W with the exponentiated gradient descent method, and thus obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization.
The construction method further includes a convergence verification step, in which the convergence of the algorithm is proved by constructing an auxiliary function.
The update iteration formulas of the fractional power inner product kernel non-negative matrix factorization are:
Figure PCTCN2018095554-appb-000019
Figure PCTCN2018095554-appb-000020
Figure PCTCN2018095554-appb-000021
As shown in FIG. 2, the present invention discloses a nonlinear non-negative matrix factorization face recognition method, which includes a training step. The training step includes:
a first step: converting the training sample images into a training sample matrix X, setting an error threshold ε and a maximum number of iterations I_max, and initializing the base image matrix W and the coefficient matrix H;
a second step: updating W and H using the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization;
a third step: if the objective function F(W, H) ≤ ε or the number of iterations reaches I_max, stopping the iteration and outputting the matrices W and H; otherwise, executing the second step.
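Below is a minimal structural sketch (my own illustration) of the first to third training steps in Python/NumPy. The plain linear NMF multiplicative update is used only as a stand-in for the patent's fractional power inner product kernel update formulas, which appear as formula images in the source text; the error threshold ε and the iteration cap I_max control the loop exactly as described in the third step.

```python
import numpy as np

def train(X, r, eps=1e-3, i_max=500, tiny=1e-9):
    # First step: X is the training sample matrix; initialize W and H.
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + tiny
    H = rng.random((r, n)) + tiny
    for _ in range(i_max):                          # at most I_max iterations
        # Second step (stand-in update, NOT the patent's kernel formulas):
        H *= (W.T @ X) / (W.T @ W @ H + tiny)
        W *= (X @ H.T) / (W @ H @ H.T + tiny)
        # Third step: stop once the objective falls below the error threshold.
        if np.linalg.norm(X - W @ H) ** 2 <= eps:
            break
    return W, H

X = np.abs(np.random.default_rng(1).random((64, 100)))   # toy "image" columns
W, H = train(X, r=10)
```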
The nonlinear non-negative matrix factorization face recognition method further includes performing a testing step after the training step. The testing step includes:
a fourth step: computing the mean feature vector m_j (j = 1, ..., c) of each class in the training samples;
a fifth step: for a test sample y, computing its feature vector h_y;
a sixth step: if
l = arg min_j ||h_y - m_j||,
then assigning the sample y to class l.
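For the fourth to sixth steps, a short sketch (illustrative names, my own example): given the feature vector h_y of a test sample and the per-class mean feature vectors m_j, the sample is assigned to the class whose mean is nearest; the Euclidean distance used here is my reading of the minimum-distance rule.

```python
import numpy as np

def classify(h_y, class_means):
    # class_means: list of mean feature vectors m_j, one per class (fourth step)
    dists = [np.linalg.norm(h_y - m_j) for m_j in class_means]
    return int(np.argmin(dists))   # index l of the nearest class mean (sixth step)

means = [np.array([0.9, 0.1, 0.0]), np.array([0.1, 0.8, 0.1])]
print(classify(np.array([0.7, 0.2, 0.1]), means))   # assigns the sample to class 0
```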
The update iteration formulas of the fractional power inner product kernel non-negative matrix factorization in the training step of this nonlinear non-negative matrix factorization face recognition method are:
Figure PCTCN2018095554-appb-000023
Figure PCTCN2018095554-appb-000024
Figure PCTCN2018095554-appb-000025
The present invention also discloses a nonlinear non-negative matrix factorization face recognition system, comprising: a memory, a processor, and a computer program stored on the memory, the computer program being configured to implement the steps of the described method when called by the processor.
The present invention also discloses a computer-readable storage medium storing a computer program configured to implement the steps of the described method when called by a processor.
Explanation of key terms (note: rough explanations of the concepts of some key terms involved in the present invention)
1. Notation
X       a matrix
x_j     the j-th column of the matrix X
x_ij    the (i, j)-th element of the matrix X
A⊙B     the element-wise product of the matrices A and B
A⊘B     the element-wise quotient of the matrices A and B
A^d     the d-th power of each element of the matrix A
2. Matrix norms
Let x = (x_1, x_2, ..., x_m)^T ∈ R^m be a vector; then the p-norm of the vector x is
||x||_p = (Σ_i |x_i|^p)^(1/p), p > 0.
For a matrix A = (a_ij) ∈ R^(m×n), the commonly used matrix norms are the following:
1. the l_1-norm:
||A||_1 = Σ_ij |a_ij|;
2. the F-norm:
||A||_F = (Σ_ij a_ij^2)^(1/2);
3. the l_{2,p}-norm:
||A||_{2,p} = (Σ_j ||a_j||_2^p)^(1/p).
From the above statements, we can see that when p = 2 the l_{2,p}-norm is exactly the F-norm; that is, the l_{2,p}-norm contains the F-norm.
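For concreteness, a small NumPy sketch of the three matrix norms defined above (the function names are my own); it also confirms numerically that the l_{2,p}-norm with p = 2 coincides with the F-norm.

```python
import numpy as np

def l1_norm(A):
    return np.abs(A).sum()

def f_norm(A):
    return np.sqrt((A ** 2).sum())

def l2p_norm(A, p):
    col_norms = np.linalg.norm(A, axis=0)      # ||a_j||_2 for every column
    return (col_norms ** p).sum() ** (1.0 / p)

A = np.array([[1.0, -2.0], [3.0, 4.0]])
print(l1_norm(A), f_norm(A), l2p_norm(A, p=2))   # the last two values are equal
```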
3. Non-negative matrix factorization (NMF)
The basic idea of NMF is to approximately factorize a non-negative sample matrix X ∈ R_+^(m×n) into the product of two non-negative matrices, that is:
X ≈ WH,
where W ∈ R_+^(m×r) and H ∈ R_+^(r×n) are called the base image matrix and the coefficient matrix, respectively. Moreover, a loss function is constructed to measure the degree of approximation between X and WH; the loss function is usually defined based on the F-norm as:
F(W, H) = ||X - WH||_F^2.
4. Kernel function
Let χ be the input space and let k(·,·) be a symmetric function defined on χ × χ; then k is a kernel function if and only if, for any data set D = {x_1, x_2, ..., x_n}, the kernel matrix K is always positive semidefinite:
K = [k(x_i, x_j)]_{n×n}.
To solve the problem that the power exponent parameter of the polynomial kernel can only be an integer and to enhance the discriminative ability of the algorithm, the present invention introduces a fractional power inner product kernel function. Let x = (x_1, ..., x_m)^T ∈ R_+^m be an m-dimensional column vector, and in the present invention define x^d = (x_1^d, ..., x_m^d)^T, d > 0.
Theorem 1: for arbitrary vectors x, y ∈ R_+^m and a positive real number d, the function k is defined as:
k(x, y) = (x^d)^T (y^d);
then k is a kernel function. We call this function the fractional power inner product kernel function.
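The following sketch implements the fractional power inner product kernel of Theorem 1 as read here, k(x, y) = (x^d)^T (y^d) with element-wise powers on non-negative vectors, and checks numerically that the resulting kernel matrix is positive semidefinite; the data are random toy samples of my own.

```python
import numpy as np

def frac_power_kernel(x, y, d):
    # k(x, y) = (x^d)^T (y^d), element-wise d-th powers, d > 0 (possibly fractional)
    return float((x ** d) @ (y ** d))

rng = np.random.default_rng(0)
X = rng.random((4, 10))          # 10 non-negative samples of dimension 4
d = 0.7                          # a fractional exponent
n = X.shape[1]
K = np.array([[frac_power_kernel(X[:, i], X[:, j], d) for j in range(n)]
              for i in range(n)])
print(np.min(np.linalg.eigvalsh(K)) >= -1e-10)   # True: K is positive semidefinite
```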
1. The proposed new KNMF
Construction of the objective function
To improve the robustness of existing kernel non-negative matrix factorization algorithms, the present invention uses the l_{2,p}-norm (0 < p ≤ 2) to characterize the loss function, that is:
Figure PCTCN2018095554-appb-000043
where, letting
Figure PCTCN2018095554-appb-000044
the loss function can be written as:
Figure PCTCN2018095554-appb-000045
To enhance the sparse representation of features, we impose a regularization restriction on the coefficient matrix H, so the objective function of the new kernel non-negative matrix factorization we propose is:
F(W, H) = F_l(W, H) + λ||H||_1,  (2)
where λ ≥ 0 is the regularization parameter.
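To make the objective concrete, here is a hedged sketch of how F(W, H) can be evaluated from kernel values alone, assuming the column-wise form F_l(W, H) = Σ_j ||φ(x_j) - φ(W)h_j||_2^p (the exact formula appears only as an image in the source) and using the identity ||φ(x_j) - φ(W)h_j||^2 = k(x_j, x_j) - 2 h_j^T (K_WX)_:,j + h_j^T K_WW h_j, with the fractional power inner product kernel and the kernel-matrix notation used later in the text.

```python
import numpy as np

def kernel_matrix(A, B, d):
    # (K_AB)_{ij} = k(a_i, b_j) = (a_i^d)^T (b_j^d) for column-sample matrices A, B
    return (A ** d).T @ (B ** d)

def objective(X, W, H, d, p, lam):
    K_XX = kernel_matrix(X, X, d)
    K_WX = kernel_matrix(W, X, d)
    K_WW = kernel_matrix(W, W, d)
    loss = 0.0
    for j in range(X.shape[1]):
        h = H[:, j]
        sq = K_XX[j, j] - 2.0 * h @ K_WX[:, j] + h @ K_WW @ h
        loss += max(sq, 0.0) ** (p / 2.0)     # ||phi(x_j) - phi(W)h_j||_2^p
    return loss + lam * np.abs(H).sum()       # plus the l_1 regularizer on H

rng = np.random.default_rng(0)
X, W, H = rng.random((5, 8)), rng.random((5, 3)), rng.random((3, 8))
print(objective(X, W, H, d=0.8, p=1.5, lam=0.1))
```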
To solve for the two unknown non-negative matrices W and H in objective function (2) using the newly constructed fractional power inner product kernel function, we convert the objective function into two sub-objective functions, namely:
Figure PCTCN2018095554-appb-000046
with W fixed, and
Figure PCTCN2018095554-appb-000047
with H fixed.
Then problem (2) also evolves into two sub-problems, namely:
min f_1(H)  s.t. H ≥ 0,  (3)
min f_2(W)  s.t. W ≥ 0.  (4)
Learning the coefficient matrix H
For sub-problem (3), the gradient descent method is used to solve for the coefficient matrix H:
Figure PCTCN2018095554-appb-000048
where
Figure PCTCN2018095554-appb-000049
is the step-size vector with respect to h_k and
Figure PCTCN2018095554-appb-000050
is the gradient of f_1(H) with respect to h_k, which can be computed as:
Figure PCTCN2018095554-appb-000051
where
Figure PCTCN2018095554-appb-000052
is a diagonal matrix,
Figure PCTCN2018095554-appb-000053
are its diagonal elements, and 1_{r×1} is an r×1 column vector whose elements are all 1. Substituting formula (6) into formula (5) gives
Figure PCTCN2018095554-appb-000054
To guarantee the non-negativity of h_k, let:
Figure PCTCN2018095554-appb-000055
Therefore, the step-size vector is chosen as:
Figure PCTCN2018095554-appb-000056
Substituting the gradient
Figure PCTCN2018095554-appb-000057
and the step-size vector
Figure PCTCN2018095554-appb-000058
into formula (5), the update iteration formula of h_k is obtained as:
Figure PCTCN2018095554-appb-000059
This update iteration formula can be converted into matrix form, as stated in the following theorem.
Theorem 2: with the matrix W fixed, the objective function f_1(H) is non-increasing when the coefficient matrix H in sub-problem (3) is updated in the following iterative manner:
Figure PCTCN2018095554-appb-000060
where
Figure PCTCN2018095554-appb-000061
is a diagonal matrix,
Figure PCTCN2018095554-appb-000062
Learning the base image matrix W
For sub-problem (4), the matrix H is fixed and the base image matrix W is learned. We define f_2(W) as the part of the objective function F(W, H) that involves the variable W; then
Figure PCTCN2018095554-appb-000063
Using the exponentiated gradient descent method, a nonlinear generalization of gradient descent, we have:
Figure PCTCN2018095554-appb-000064
where
Figure PCTCN2018095554-appb-000065
Figure PCTCN2018095554-appb-000066
is a step-size column vector and
Figure PCTCN2018095554-appb-000067
is the gradient of f_2(W) with respect to
Figure PCTCN2018095554-appb-000068
For the variable
Figure PCTCN2018095554-appb-000069
the function f_2(W) can be expressed as:
Figure PCTCN2018095554-appb-000070
The derivative of the function
Figure PCTCN2018095554-appb-000071
with respect to
Figure PCTCN2018095554-appb-000072
can be computed as:
Figure PCTCN2018095554-appb-000073
Substituting formula (9) into formula (8) gives
Figure PCTCN2018095554-appb-000074
To guarantee the non-negativity of w_k and
Figure PCTCN2018095554-appb-000075
we let
Figure PCTCN2018095554-appb-000076
Therefore, the step size is chosen as:
Figure PCTCN2018095554-appb-000077
Substituting formulas (9) and (10) into formula (8), the iteration formula for
Figure PCTCN2018095554-appb-000078
is obtained as:
Figure PCTCN2018095554-appb-000079
According to the formula
Figure PCTCN2018095554-appb-000080
the update iteration formula of w_k is obtained as:
Figure PCTCN2018095554-appb-000081
Extending this to matrix form yields the update iteration formula of W, as in the following theorem.
Theorem 3: with the matrix H fixed, the objective function f_2(W) is non-increasing when the base image matrix W in sub-problem (4) is updated in the following iterative manner:
Figure PCTCN2018095554-appb-000082
where X^d and W^(t)d denote the d-th power of each element of the matrices X and W^(t), ( )^(1/d) denotes the 1/d-th power of each element of a matrix,
Figure PCTCN2018095554-appb-000083
is a diagonal matrix, and its diagonal elements are
Figure PCTCN2018095554-appb-000084
In summary, from Theorem 2 and Theorem 3, the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization proposed by the present invention are obtained as:
Figure PCTCN2018095554-appb-000085
2. Proof of convergence
Definition 1: for any matrices H and H^(t), if the conditions
G(H, H^(t)) ≥ f(H) and G(H^(t), H^(t)) = f(H^(t))
are satisfied, then G(H, H^(t)) is called an auxiliary function of the function f(H).
Lemma 1: if G(H, H^(t)) is an auxiliary function of f(H), then f(H) is monotonically non-increasing under the following update rule:
H^(t+1) = arg min_H G(H, H^(t)).
Next, we prove that Theorem 2 and Theorem 3 hold by constructing auxiliary functions, that is, we prove that the new algorithm constructed by the present invention is convergent.
Theorem 4: let
Figure PCTCN2018095554-appb-000087
be a diagonal matrix whose diagonal elements are
Figure PCTCN2018095554-appb-000088
Then
Figure PCTCN2018095554-appb-000089
is an auxiliary function of f_1(H).
Proof: the function f_1(H) is a quadratic function of the variable H, so an exact second-order Taylor expansion with respect to H can be performed, giving
Figure PCTCN2018095554-appb-000090
where
Figure PCTCN2018095554-appb-000091
are the first-order and second-order partial derivatives of f_1(H) with respect to h_i at H^(t), respectively. Then,
Figure PCTCN2018095554-appb-000092
It can clearly be seen that if, for all i,
Figure PCTCN2018095554-appb-000093
then G_1(H, H^(t)) ≥ f_1(H). A necessary and sufficient condition for inequality (13) to hold is that the matrix
Figure PCTCN2018095554-appb-000094
is positive semidefinite. To prove that this condition holds, we transform the elements of the matrix
Figure PCTCN2018095554-appb-000095
and construct a matrix P whose elements are
Figure PCTCN2018095554-appb-000096
Obviously, when the matrix P is positive semidefinite, the matrix
Figure PCTCN2018095554-appb-000097
is also positive semidefinite.
For an arbitrary vector
Figure PCTCN2018095554-appb-000098
Figure PCTCN2018095554-appb-000099
Therefore, the matrices P and
Figure PCTCN2018095554-appb-000100
are positive semidefinite and G_1(H, H^(t)) ≥ f_1(H). Moreover, it is obvious that when H = H^(t),
Figure PCTCN2018095554-appb-000101
In summary, G_1(H, H^(t)) is an auxiliary function of f_1(H), which completes the proof.
According to Definition 1 and Lemma 1, the function G_1(H, H^(t)) is an upper bound of the function f_1(H), and
Figure PCTCN2018095554-appb-000102
To obtain the minimum of G_1(H, H^(t)), we take its derivative and set it to 0, giving
Figure PCTCN2018095554-appb-000103
Hence,
Figure PCTCN2018095554-appb-000104
Multiplying both sides by
Figure PCTCN2018095554-appb-000105
Figure PCTCN2018095554-appb-000106
the update iteration formula of h_k is obtained as
Figure PCTCN2018095554-appb-000107
Converting this into matrix form yields formula (7). Therefore, the proof of Theorem 2 is complete: the objective function is non-increasing under update iteration formula (7).
Theorem 5: let
Figure PCTCN2018095554-appb-000108
be a symmetric matrix whose elements are
Figure PCTCN2018095554-appb-000109
Then the function
Figure PCTCN2018095554-appb-000110
is an auxiliary function of f_2(W).
Proof: let the matrix A = K_XX - 2K_XWH + H^T K_WWH; then
Figure PCTCN2018095554-appb-000111
Therefore,
Figure PCTCN2018095554-appb-000112
Figure PCTCN2018095554-appb-000113
It can clearly be seen that when W = W^(t), G(W^(t), W^(t)) = f_2(W^(t)). Furthermore, since
Figure PCTCN2018095554-appb-000114
we obtain G(W, W^(t)) - f_2(W) ≥ 0, so G(W, W^(t)) is an auxiliary function of f_2(W), which completes the proof.
Suppose the k-th column w_k of the matrix W is unknown and all the other columns are known. Taking the derivative of the auxiliary function G(W, W^(t)) with respect to w_k gives
Figure PCTCN2018095554-appb-000115
When
Figure PCTCN2018095554-appb-000116
we have
Figure PCTCN2018095554-appb-000117
By calculation, we obtain
Figure PCTCN2018095554-appb-000118
Therefore, the update iteration formula of w_k is
Figure PCTCN2018095554-appb-000119
Converting this into matrix form yields formula (11), so Theorem 3 holds.
3. Feature extraction
Suppose y is a test sample and the nonlinear mapping φ maps it into the feature space as φ(y); φ(y) can be expressed as a linear combination of the column vectors of the mapped base image matrix φ(W), namely:
φ(y) = φ(W)h_y,
where h_y is the feature vector of φ(y). Multiplying both sides of the above equation by φ(W)^T gives
φ(W)^T φ(y) = φ(W)^T φ(W)h_y,
that is,
K_Wy = K_WW h_y,
where K_Wy is a kernel vector. Therefore, the feature h_y can be obtained as
h_y = (K_WW)^+ K_Wy,
where (K_WW)^+ is the generalized inverse of the matrix K_WW. Similarly, the mean feature vector of the training samples can be obtained. Suppose there are c classes of samples in the original space, the number of training samples of the j-th class is n_j (j = 1, 2, ..., c) and the training sample matrix of the j-th class is X_j; then the mean feature vector of the j-th class can be expressed as:
m_j = (1/n_j) (K_WW)^+ K_WX_j 1_{n_j×1},
where 1_{n_j×1} is an all-ones column vector of dimension n_j × 1.
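A sketch of the feature-extraction formulas just derived, h_y = (K_WW)^+ K_Wy and the class-mean feature m_j, computed with the Moore-Penrose pseudoinverse and the fractional power inner product kernel; the matrix shapes and data below are purely illustrative.

```python
import numpy as np

def kernel_matrix(A, B, d):
    # (K_AB)_{ij} = k(a_i, b_j) = (a_i^d)^T (b_j^d)
    return (A ** d).T @ (B ** d)

def feature(W, y, d):
    K_WW = kernel_matrix(W, W, d)
    K_Wy = (W ** d).T @ (y ** d)              # kernel vector K_Wy
    return np.linalg.pinv(K_WW) @ K_Wy        # h_y = (K_WW)^+ K_Wy

def class_mean_feature(W, X_j, d):
    # m_j = (1/n_j) (K_WW)^+ K_WX_j 1_{n_j x 1}
    n_j = X_j.shape[1]
    K_WW = kernel_matrix(W, W, d)
    K_WXj = kernel_matrix(W, X_j, d)
    return np.linalg.pinv(K_WW) @ K_WXj @ np.ones(n_j) / n_j

rng = np.random.default_rng(0)
W, X_1, y = rng.random((16, 5)), rng.random((16, 12)), rng.random(16)
print(feature(W, y, d=0.7).shape, class_mean_feature(W, X_1, d=0.7).shape)
```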
In summary, the specific construction process of the face recognition algorithm of the present invention is as follows:
(1) use the l_{2,p}-norm of the matrix to characterize the degree of loss after matrix factorization;
(2) use the l_1-norm of the matrix to enhance the sparse representation of features, adding a regularization term on the matrix H to the loss function;
(3) introduce into the algorithm of the present invention the fractional power inner product kernel function that we constructed, which has strong flexibility and high discriminative ability;
(4) derive the update iteration formulas of the algorithm of the present invention by using the gradient descent method and its generalization, the exponentiated gradient descent method;
(5) prove the convergence of the algorithm of the present invention by constructing auxiliary functions, which theoretically guarantees the soundness of the algorithm.
As shown in FIG. 4, the convergence of the algorithm proposed by the present invention is not only proved theoretically by means of auxiliary functions but is also verified in experiments; our algorithm shows good convergence behavior.
As shown in FIG. 3 and Table 1, experimental comparisons with related algorithms on a public face database show that the algorithm developed by the present invention has certain advantages.
Table 1: comparison of recognition rates (%) on the CMU PIE face database between the algorithm proposed by the present invention (Our Method), kernel locality preserving projections (KLPP), local non-negative matrix factorization (LNMF) and polynomial kernel non-negative matrix factorization (PNMF)
(TN denotes the number of training samples per class)
TN          7      9      11     13     15     17
KLPP        61.77  63.60  61.60  62.78  63.27  61.41
LNMF        64.35  65.82  67.35  68.40  69.13  69.75
PNMF        52.60  54.05  55.66  57.00  56.76  57.52
Our Method  70.77  72.39  73.20  74.72  75.14  75.51
Table 1
The beneficial effects of the present invention are: 1. by using the l_{2,p}-norm instead of the F-norm to measure the degree of loss of the matrix factorization, the problem of sensitivity to outliers in existing kernel non-negative matrix factorization algorithms is solved and the stability of the kernel non-negative matrix factorization algorithm proposed by the present invention is enhanced; 2. by adding a regularization restriction on the coefficient matrix H to the objective function, the sparsity of the feature representation of the algorithm of the present invention is enhanced; 3. by integrating the proposed kernel non-negative matrix factorization, which has the new objective function, with the fractional power inner product kernel function, a sparse fractional power inner product nonlinear non-negative matrix factorization algorithm with efficient recognition performance is obtained, which solves the problem that the hyperparameter of the polynomial kernel function can only be an integer and improves the discriminative ability of the algorithm.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and it should not be considered that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the technical field to which the present invention belongs, several simple deductions or substitutions can also be made without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.

Claims (8)

  1. A method for constructing nonlinear non-negative matrix factorization face recognition, characterized by comprising the following steps:
    a loss characterization step: using the l_{2,p}-norm of the matrix to characterize the degree of loss after matrix factorization;
    a sparsity enhancement step: using the l_1-norm of the matrix to enhance the sparse representation of features, and adding a regularization term on the matrix H to the loss function;
    an objective function construction step: forming the objective function F(W, H) from the loss characterization step and the sparsity enhancement step;
    a step of obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization: using the fractional power inner product kernel function to form the optimization problem to be solved, solving H with the gradient descent method and W with the exponentiated gradient descent method, and thus obtaining the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization.
  2. The construction method according to claim 1, characterized in that the construction method further comprises a convergence verification step, in which the convergence of the algorithm is proved by constructing an auxiliary function.
  3. The construction method according to claim 1, characterized in that the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization are:
    Figure PCTCN2018095554-appb-100001
    Figure PCTCN2018095554-appb-100002
    Figure PCTCN2018095554-appb-100003
    wherein
    Figure PCTCN2018095554-appb-100004
    is the non-negative sample matrix,
    Figure PCTCN2018095554-appb-100005
    Figure PCTCN2018095554-appb-100006
    are respectively the base image matrix and the coefficient matrix at the t-th iteration, the elements of the matrix
    Figure PCTCN2018095554-appb-100007
    are
    Figure PCTCN2018095554-appb-100008
    is a diagonal matrix whose diagonal elements are
    Figure PCTCN2018095554-appb-100009
    and ( )^d and ( )^(1/d) respectively denote the d-th power and the 1/d-th power of each element of a matrix,
    Figure PCTCN2018095554-appb-100010
  4. A nonlinear non-negative matrix factorization face recognition method, characterized by comprising a training step, the training step comprising:
    a first step: converting training sample images into a training sample matrix X, setting an error threshold ε and a maximum number of iterations I_max, and initializing the base image matrix W and the coefficient matrix H;
    a second step: updating W and H using the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization;
    a third step: if the objective function F(W, H) ≤ ε or the number of iterations reaches I_max, stopping the iteration and outputting the matrices W and H; otherwise, executing the second step.
  5. The nonlinear non-negative matrix factorization face recognition method according to claim 4, characterized in that the method further comprises performing a testing step after the training step, the testing step comprising:
    a fourth step: computing the mean feature vector m_j (j = 1, ..., c) of each class in the training samples;
    a fifth step: for a test sample y, computing its feature vector h_y;
    a sixth step: if
    Figure PCTCN2018095554-appb-100011
    then assigning the sample y to class l.
  6. The nonlinear non-negative matrix factorization face recognition method according to claim 4, characterized in that the update iteration formulas of the fractional power inner product kernel non-negative matrix factorization are:
    Figure PCTCN2018095554-appb-100012
    Figure PCTCN2018095554-appb-100013
    Figure PCTCN2018095554-appb-100014
  7. A nonlinear non-negative matrix factorization face recognition system, characterized by comprising: a memory, a processor, and a computer program stored on the memory, the computer program being configured to implement the steps of the method according to any one of claims 4 to 6 when called by the processor.
  8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program configured to implement the steps of the method according to any one of claims 4 to 6 when called by a processor.
PCT/CN2018/095554 2018-07-13 2018-07-13 一种非线性非负矩阵分解人脸识别构建方法、系统及存储介质 WO2020010602A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/095554 WO2020010602A1 (zh) 2018-07-13 2018-07-13 一种非线性非负矩阵分解人脸识别构建方法、系统及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/095554 WO2020010602A1 (zh) 2018-07-13 2018-07-13 一种非线性非负矩阵分解人脸识别构建方法、系统及存储介质

Publications (1)

Publication Number Publication Date
WO2020010602A1 true WO2020010602A1 (zh) 2020-01-16

Family

ID=69143019

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/095554 WO2020010602A1 (zh) 2018-07-13 2018-07-13 一种非线性非负矩阵分解人脸识别构建方法、系统及存储介质

Country Status (1)

Country Link
WO (1) WO2020010602A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967499A (zh) * 2020-07-21 2020-11-20 电子科技大学 基于自步学习的数据降维方法
CN112116017A (zh) * 2020-09-25 2020-12-22 西安电子科技大学 基于核保持的数据降维方法
CN112598130A (zh) * 2020-12-09 2021-04-02 华东交通大学 基于自编码器和奇异值阈值的土壤湿度数据重构方法和计算机可读存储介质
CN112966735A (zh) * 2020-11-20 2021-06-15 扬州大学 一种基于谱重建的监督多集相关特征融合方法
CN113705674A (zh) * 2021-08-27 2021-11-26 西安交通大学 一种非负矩阵分解聚类方法、装置及可读存储介质
CN114936597A (zh) * 2022-05-20 2022-08-23 电子科技大学 一种局部信息增强子空间真假目标特征提取方法
CN116189760A (zh) * 2023-04-19 2023-05-30 中国人民解放军总医院 基于矩阵补全的抗病毒药物筛选方法、系统及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120041906A1 (en) * 2010-08-11 2012-02-16 Huh Seung-Il Supervised Nonnegative Matrix Factorization
CN105469034A (zh) * 2015-11-17 2016-04-06 西安电子科技大学 基于加权式鉴别性稀疏约束非负矩阵分解的人脸识别方法
CN106897685A (zh) * 2017-02-17 2017-06-27 深圳大学 基于核非负矩阵分解的字典学习和稀疏特征表示的人脸识别方法及系统
CN107480636A (zh) * 2017-08-15 2017-12-15 深圳大学 基于核非负矩阵分解的人脸识别方法、系统及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120041906A1 (en) * 2010-08-11 2012-02-16 Huh Seung-Il Supervised Nonnegative Matrix Factorization
CN105469034A (zh) * 2015-11-17 2016-04-06 西安电子科技大学 基于加权式鉴别性稀疏约束非负矩阵分解的人脸识别方法
CN106897685A (zh) * 2017-02-17 2017-06-27 深圳大学 基于核非负矩阵分解的字典学习和稀疏特征表示的人脸识别方法及系统
CN107480636A (zh) * 2017-08-15 2017-12-15 深圳大学 基于核非负矩阵分解的人脸识别方法、系统及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU, JINGMIN ET AL.: "Nonlinear Non-Negative Matrix Factorization with Fractional Power Inner-Product Kernel for Face Recognition", 2017 INTERNATIONAL CONFERENCE ON SECURITY, PATTERN ANALYSIS, AND CYBERNETICS, 15 December 2017 (2017-12-15), pages 406 - 411, XP033325403 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967499B (zh) * 2020-07-21 2023-04-07 电子科技大学 基于自步学习的数据降维方法
CN111967499A (zh) * 2020-07-21 2020-11-20 电子科技大学 基于自步学习的数据降维方法
CN112116017A (zh) * 2020-09-25 2020-12-22 西安电子科技大学 基于核保持的数据降维方法
CN112116017B (zh) * 2020-09-25 2024-02-13 西安电子科技大学 基于核保持的图像数据降维方法
CN112966735A (zh) * 2020-11-20 2021-06-15 扬州大学 一种基于谱重建的监督多集相关特征融合方法
CN112966735B (zh) * 2020-11-20 2023-09-12 扬州大学 一种基于谱重建的监督多集相关特征融合方法
CN112598130A (zh) * 2020-12-09 2021-04-02 华东交通大学 基于自编码器和奇异值阈值的土壤湿度数据重构方法和计算机可读存储介质
CN112598130B (zh) * 2020-12-09 2024-04-09 华东交通大学 基于自编码器和奇异值阈值的土壤湿度数据重构方法和计算机可读存储介质
CN113705674A (zh) * 2021-08-27 2021-11-26 西安交通大学 一种非负矩阵分解聚类方法、装置及可读存储介质
CN113705674B (zh) * 2021-08-27 2024-04-05 西安交通大学 一种非负矩阵分解聚类方法、装置及可读存储介质
CN114936597A (zh) * 2022-05-20 2022-08-23 电子科技大学 一种局部信息增强子空间真假目标特征提取方法
CN114936597B (zh) * 2022-05-20 2023-04-07 电子科技大学 一种局部信息增强子空间真假目标特征提取方法
CN116189760A (zh) * 2023-04-19 2023-05-30 中国人民解放军总医院 基于矩阵补全的抗病毒药物筛选方法、系统及存储介质
CN116189760B (zh) * 2023-04-19 2023-07-07 中国人民解放军总医院 基于矩阵补全的抗病毒药物筛选方法、系统及存储介质

Similar Documents

Publication Publication Date Title
WO2020010602A1 (zh) 一种非线性非负矩阵分解人脸识别构建方法、系统及存储介质
Xue et al. Deep low-rank subspace ensemble for multi-view clustering
Zhang et al. Robust low-rank kernel multi-view subspace clustering based on the schatten p-norm and correntropy
Arora et al. Stochastic optimization for PCA and PLS
Xie et al. Implicit block diagonal low-rank representation
Li et al. Learning low-rank and discriminative dictionary for image classification
WO2020082315A2 (zh) 一种非负特征提取及人脸识别应用方法、系统及存储介质
CN109002794B (zh) 一种非线性非负矩阵分解人脸识别构建方法、系统及存储介质
Fang et al. Graph-based learning via auto-grouped sparse regularization and kernelized extension
WO2020118708A1 (zh) 基于e辅助函数的半非负矩阵分解的人脸识别方法、系统及存储介质
CN110717519A (zh) 训练、特征提取、分类方法、设备及存储介质
Jin et al. Multiple graph regularized sparse coding and multiple hypergraph regularized sparse coding for image representation
WO2021003637A1 (zh) 基于加性高斯核的核非负矩阵分解人脸识别方法、装置、系统及存储介质
Ge et al. Stacked denoising extreme learning machine autoencoder based on graph embedding for feature representation
Chen et al. Semi-supervised dictionary learning with label propagation for image classification
Song et al. MPPCANet: A feedforward learning strategy for few-shot image classification
Garratt et al. Two methods for the numerical detection of Hopf bifurcations
Bernstein et al. Manifold learning in regression tasks
Zhou et al. Face recognition based on the improved MobileNet
CN111325275A (zh) 基于低秩二维局部鉴别图嵌入的鲁棒图像分类方法及装置
Wei et al. Adaptive graph convolutional subspace clustering
Zhang et al. Adaptive graph regularization discriminant nonnegative matrix factorization for data representation
Yao A compressed deep convolutional neural networks for face recognition
CN110378262B (zh) 基于加性高斯核的核非负矩阵分解人脸识别方法、装置、系统及存储介质
Ziyaden et al. Long-context transformers: A survey

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18925921

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/05/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18925921

Country of ref document: EP

Kind code of ref document: A1