CN110619367A - Joint low-rank constraint cross-view-angle discrimination subspace learning method and device - Google Patents

Joint low-rank constraint cross-view-angle discrimination subspace learning method and device

Info

Publication number
CN110619367A
CN110619367A (application CN201910891895.5A)
Authority
CN
China
Prior art keywords
objective function
rank
subspace
low
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910891895.5A
Other languages
Chinese (zh)
Other versions
CN110619367B
Inventor
李骜
丁宇
孙广路
陈德云
林克正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN201910891895.5A
Publication of CN110619367A
Application granted
Publication of CN110619367B
Legal status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 — Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Classification techniques
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 — Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention provide a joint low-rank constraint cross-view discrimination subspace learning method and device. The method comprises the following steps: defining an objective function of a dual low-rank discrimination subspace learning model; adopting a supervised regularization term as a strong constraint condition and re-formulating the objective function; adding a joint heterogeneous regularization term and re-formulating the objective function again; dividing the image data set into a test set and a training set; solving, with the training set, for the value of each variable that minimizes the objective function; obtaining a feature subspace from the solution of the objective function; and projecting the test set through the feature subspace to obtain the features of all images in the data set, finally obtaining the recognition rate on the data set through a classifier. The method of the invention uses the joint heterogeneous regularization term as a constraint to construct a discriminant term for feature learning, can project the homogeneous and heterogeneous information of the samples into a discriminative learning model for image recognition and classification tasks, and improves model adaptability and robustness.

Description

Joint low-rank constraint cross-view-angle discrimination subspace learning method and device
Technical Field
The embodiments of the invention relate to the field of image classification, in particular to a joint low-rank constraint cross-view discrimination subspace learning method and device.
Background
In recent years, cross-view learning has attracted a great deal of attention, because images are often captured from different perspectives or by different sensor devices. Many cross-view discrimination subspace learning methods have been proposed; they have not only attracted wide attention but have also been applied successfully in practice. However, the discriminant models constructed by these methods only separate samples of different classes within the same view and pull samples of the same class from different views close to each other, while ignoring the homogeneous and heterogeneous information hidden across the different views.
Disclosure of Invention
In this context, embodiments of the present invention are intended to provide a joint low-rank constraint cross-view discrimination subspace learning method and apparatus, which use dual low-rank constraints to construct robust representation matrices for feature learning, bring samples of the same class from different views close to a common clustering center, and push the clustering centers of different classes within the same view far apart from each other; in addition, the features of different views are integrated into a unified projection subspace through a joint constraint, improving model adaptability and robustness.
In one aspect of the embodiments of the present invention, a joint low-rank constraint cross-view discrimination subspace learning method is provided, including: dividing an image data set containing multiple views into a test set and a training set, wherein the test set and the training set each contain data of two different views; defining an objective function of a dual low-rank discrimination subspace learning model, wherein the first term of the objective function imposes a low-rank constraint on the class structure, the elements of its matrix measuring the low-dimensional structural similarity of samples from two different classes, and the second term imposes a low-rank constraint on the view variance, the elements of its matrix measuring the low-dimensional structural similarity of samples from two different views; adding a supervised graph regularization term and the subspace to be learned, and applying an orthogonal constraint to the feature subspace to eliminate trivial solutions, so as to re-formulate the objective function; adding a joint heterogeneous regularization term to re-formulate the objective function again; solving, with the training set, for the value of each variable that minimizes the objective function; obtaining the feature subspace from the solution of the objective function; and projecting the test set through the feature subspace to obtain the features of all images in the data set, finally obtaining the recognition rate on the data set through a classifier.
Further, the objective function in the step of defining the objective function of the dual low rank decision subspace learning model is as follows:
min_{Z_c,Z_v,E} rank(Z_c) + rank(Z_v) + λ‖E‖_1
s.t. X = X(Z_c + Z_v) + E
wherein rank(·) represents the rank of a matrix, X = [X_1, X_2, ..., X_k] ∈ R^{d×m} represents a training set containing k views, d represents the dimension of the original feature of each sample, m_i represents the number of training samples of the i-th view (m = Σ_i m_i), Z_c is the robust representation matrix of the class structure, Z_v is the robust representation matrix of the view-variance structure, E is the error matrix, and λ is the balance parameter of the error matrix E.
Further, in the step of adding a supervised graph regularization term and the subspace to be learned, and applying an orthogonal constraint to the feature subspace to eliminate trivial solutions so as to re-formulate the objective function, the re-formulated objective function is as follows:
min_{Z_c,Z_v,E,P} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I
wherein ‖·‖_* denotes the nuclear norm, used as a convex surrogate for rank(·), P ∈ R^{d×p} is the projection subspace, α is the balance parameter of the supervised graph regularization term G(P, Z_c, Z_v) = tr(P^T X Z_c L_c Z_c^T X^T P) − tr(P^T X Z_v L_v Z_v^T X^T P), and L_c and L_v are graph Laplacians.
Further, in the step of adding a joint heterogeneous regularization term to re-formulate the objective function, the re-formulated objective function is as follows:
min_{Z_c,Z_v,E,P,W_0,W_1,W_2} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v) + γ(‖Y_1 − (W_0 + W_1)^T P^T X_1‖_F^2 + ‖Y_2 − (W_0 + W_2)^T P^T X_2‖_F^2) + ε(‖W_0‖_F^2 + ‖W_1‖_F^2 + ‖W_2‖_F^2)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I
wherein W_0, W_1 and W_2 are weight matrices, γ and ε are the balance parameters of the weight matrices, Y ∈ R^{L×m} is a matrix determined by the class labels, Y_j = [−1, −1, ..., L−1, ..., −1]^T ∈ R^L denotes the j-th column of Y, and if the j-th sample belongs to the l-th class, the l-th element equals L−1 and the remaining elements equal −1.
Further, an auxiliary variable M is introduced into the objective function to solve the minimization problem according to the following constraints:
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P = M,  P^T P = I.
Further, the method for solving the value of each variable that minimizes the objective function includes: determining the Lagrangian function of the objective function problem by the augmented Lagrange multiplier method; simplifying the Lagrangian function and converting it into a minimization problem; iteratively solving the minimization for each variable, with the other variables held fixed, by the alternating direction method of multipliers; fixing the other variables and deleting the function terms unrelated to Z_c to obtain the objective function expression of the variable Z_c, which is solved and updated through the singular value shrinkage operator; fixing the other variables and deleting the function terms unrelated to Z_v to obtain the objective function expression of the variable Z_v, which is solved and updated through the singular value shrinkage operator; fixing the other variables and deleting the function terms unrelated to E to obtain the objective function expression of the variable E, and updating the matrix E; fixing the other variables and deleting the function terms unrelated to P to obtain the objective function expression of the variable P, and solving it in closed form by forcing its derivative to zero; fixing the other variables and deleting the function terms unrelated to M to obtain the objective function expression of the variable M, differentiating it with respect to M, and solving for M by a gradient method using the objective function expression and its partial derivative with respect to M; fixing the other variables and deleting the function terms unrelated to W_0 to obtain the objective function expression of the variable W_0, and solving it by forcing its derivative to zero; fixing the other variables and deleting the function terms unrelated to W_1 to obtain the objective function expression of the variable W_1, and solving it by forcing its derivative to zero; fixing the other variables and deleting the function terms unrelated to W_2 to obtain the objective function expression of the variable W_2, and solving it by forcing its derivative to zero; and updating the Lagrange multipliers and parameters item by item.
According to another aspect of the present invention, there is also provided a joint low-rank constraint cross-view discrimination subspace learning apparatus, including: a storage unit adapted to divide an image data set containing multiple views into a test set and a training set, wherein the test set and the training set each contain data of two different views; a defining unit adapted to define the objective function of a dual low-rank discrimination subspace learning model, wherein the first term of the objective function imposes a low-rank constraint on the class structure, the elements of its matrix measuring the low-dimensional structural similarity of samples from two different classes, and the second term imposes a low-rank constraint on the view variance, the elements of its matrix measuring the low-dimensional structural similarity of samples from two different views; a first re-formulation unit adapted to add a supervised graph regularization term and the subspace to be learned, and to apply an orthogonal constraint to the feature subspace to eliminate trivial solutions, so as to re-formulate the objective function; a second re-formulation unit adapted to add a joint heterogeneous regularization term to re-formulate the objective function again; a solving unit adapted to solve, with the training set, for the value of each variable that minimizes the objective function and to obtain the feature subspace from the solution of the objective function; and an obtaining unit adapted to project the test set through the feature subspace to obtain the features of all class images in the data set, and finally to obtain the recognition rate on the data set through a classifier.
The invention provides a joint low-rank constraint cross-view discrimination subspace learning method and device for image classification. The robust representation matrix is decomposed into two low-rank matrices, separating the class structure and the view-variance structure of multi-view image data and enhancing the classification effect of the model on multi-view images. In addition, by adding a supervised graph regularization term, data of the same class are kept more compact in the projection subspace while the distance between images of different classes under the same view is maximized; the projection subspace is further constrained by adding weight matrices, so that the projection subspace of the same class contains both the features shared across views and the features independent to each view. Compared with other methods, the proposed method achieves a higher recognition rate and more stable performance.
The embodiments of the invention adopt a new feature subspace learning model, combining joint feature representation and feature learning into a unified framework; in the new model, dual low-rank representation coefficients are used as a subspace similarity measure to guide feature learning; furthermore, class-label-based linear regression is incorporated into the proposed model as another supervised regularization term to expand class boundaries and reduce the intra-class sample distance, which makes the extracted features more suitable for classification tasks.
In addition, embodiments of the present invention also provide an iterative scheme using an Augmented Lagrange Multiplier (ALM) method and an alternating direction multiplier (ADMM) method, by which the objective function is effectively solved and convergence is ensured.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a flow diagram that schematically illustrates an exemplary process of a joint low-rank constrained cross-view discrimination subspace learning method, in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram schematically illustrating an example of a joint low rank constrained cross-view discrimination subspace learning apparatus according to an embodiment of the present invention;
FIG. 3 is an effect graph of an objective function of an embodiment of the present invention;
FIG. 4 is an exemplary comparison of partial samples of four public datasets according to an embodiment of the present invention;
FIG. 5 is a graph of classification results on Case8 in the PIE data set according to an embodiment of the present invention with respect to parameters λ, α, ε, γ;
FIG. 6 is a graph of the relationship between the classification results on Case8 of the PIE data set and Case1 of the Extended YaleB data set and the post-projection dimension d_p, according to an embodiment of the present invention.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
Exemplary method
Fig. 1 schematically illustrates an exemplary process flow 100 of a joint low-rank constrained cross-view decision subspace learning method in accordance with an embodiment of the present disclosure.
As shown in fig. 1, after the process flow 100 is started, step S110 is first executed.
In step S110, the image data set containing multiple viewing angles is divided into a test set and a training set, wherein the test set and the training set respectively contain data sets of two different viewing angles.
That is, the test set may be divided into two sub data sets that differ in view.
The training set may also be divided into two sub data sets that differ in view angle.
Next, in step S120, an objective function of the dual low-rank decision subspace learning model is defined, in which a first term performs low-rank constraint on class structure, elements in the matrix are measures of low-dimensional structural similarity of samples of two different classes, and a second term performs low-rank constraint on view variance, and elements in the matrix are measures of low-dimensional structural similarity of samples of two different view angles. Further, the third term in the objective function represents a 1-norm constraint on the error matrix E.
Next, in step S130, the supervised graph regularization term and the subspace to be learned are added, and an orthogonal constraint is applied to the feature subspace to eliminate trivial solutions, so as to re-formulate the objective function.
Next, in step S140, a joint heterogeneous regularization term is added to re-formulate the objective function.
Then, in step S150, the values of the variables when the objective function value is minimized are solved using the training set.
Thus, after the solution through the objective function, a feature subspace is obtained.
Finally, in step S160, the test set is projected through the feature subspace to obtain the features of all class images in the data set, and the recognition rate on the data set is then obtained through a classifier.
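As an illustrative sketch only (not the patented implementation), the projection and classification of step S160 can be written as follows; the function and variable names are hypothetical, matrices are assumed to store one sample per column, and a 1-nearest-neighbour rule stands in for the classifier:

```python
import numpy as np

def project_and_classify(P, X_train, y_train, X_test):
    """Project raw features through a learned subspace P (d x p) and classify
    each test sample with a 1-nearest-neighbour rule. X_train and X_test are
    assumed to hold one d-dimensional sample per column; y_train holds labels."""
    F_train = P.T @ X_train            # subspace features of the training set (p x m_train)
    F_test = P.T @ X_test              # subspace features of the test set (p x m_test)
    preds = []
    for j in range(F_test.shape[1]):
        dists = np.linalg.norm(F_train - F_test[:, [j]], axis=0)  # distances to all training features
        preds.append(y_train[np.argmin(dists)])                   # label of the nearest neighbour
    return np.array(preds)

# recognition rate = fraction of correctly classified test samples, e.g.
# acc = np.mean(project_and_classify(P, X_tr, y_tr, X_te) == y_te)
```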
As an example, the objective function in the step of defining the objective function of the dual low rank decision subspace learning model is as follows:
min_{Z_c,Z_v,E} rank(Z_c) + rank(Z_v) + λ‖E‖_1
s.t. X = X(Z_c + Z_v) + E
wherein rank(·) represents the rank of a matrix, X = [X_1, X_2, ..., X_k] ∈ R^{d×m} represents a training set containing k views, k is a positive integer, d represents the dimension of the original feature of each sample, m_i represents the number of training samples of the i-th view (m = Σ_i m_i, i = 1, ..., k), Z_c is the robust representation matrix of the class structure, Z_v is the robust representation matrix of the view-variance structure, E is the error matrix, and λ is the balance parameter of the error matrix E.
As an example, in the step of adding the supervised graph regularization term and the subspace to be learned, and applying an orthogonal constraint to the feature subspace to eliminate trivial solutions so as to re-formulate the objective function, the re-formulated objective function is, for example, as follows:
min_{Z_c,Z_v,E,P} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I
wherein ‖·‖_* denotes the nuclear norm, used as a convex surrogate for rank(·), P ∈ R^{d×p} is the projection subspace, α is the balance parameter of the supervised graph regularization term G(P, Z_c, Z_v) = tr(P^T X Z_c L_c Z_c^T X^T P) − tr(P^T X Z_v L_v Z_v^T X^T P), and L_c and L_v are graph Laplacians.
As an example, in the step of adding a joint heterogeneous regularization term to re-formulate the objective function, the re-formulated objective function is as follows:
min_{Z_c,Z_v,E,P,W_0,W_1,W_2} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v) + γ(‖Y_1 − (W_0 + W_1)^T P^T X_1‖_F^2 + ‖Y_2 − (W_0 + W_2)^T P^T X_2‖_F^2) + ε(‖W_0‖_F^2 + ‖W_1‖_F^2 + ‖W_2‖_F^2)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I
wherein W_0, W_1 and W_2 are weight matrices, γ and ε are the balance parameters of the weight matrices, Y ∈ R^{L×m} is a matrix determined by the class labels, Y_j = [−1, −1, ..., L−1, ..., −1]^T ∈ R^L denotes the j-th column of Y (j = 1, 2, ..., m_i), and if the j-th sample belongs to the l-th class, the l-th element equals L−1 and the remaining elements equal −1; L is the number of sample classes.
As an example, an auxiliary variable M may be introduced into the objective function to solve the minimization problem, for example, subject to the following constraints:
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P = M,  P^T P = I.
as an example, the solving method of the values of the respective variables when the objective function value is minimized may include steps S1501 to S1511, for example.
In step S1501, a lagrangian function in the objective function problem is determined by the augmented lagrangian multiplier method.
Next, in step S1502, the lagrangian function is subjected to simplification and minimization conversion.
Then, in step S1503, the minimization is solved iteratively for each variable, with the other variables held fixed, using the alternating direction method of multipliers: the other variables are fixed and the function terms unrelated to Z_c are deleted to obtain the objective function expression of the variable Z_c, which is solved and updated through the singular value shrinkage operator.
Then, in step S1504, the other variables (i.e., the variables other than Z_v) are fixed and the function terms unrelated to Z_v are deleted to obtain the objective function expression of the variable Z_v, which is solved and updated through the singular value shrinkage operator.
Then, in step S1505, the other variables (i.e., the variables other than E) are fixed and the function terms unrelated to E are deleted to obtain the objective function expression of the variable E, and the matrix E is updated.
Next, in step S1506, the other variables (i.e., the variables other than P) are fixed and the function terms unrelated to P are deleted to obtain the objective function expression of the variable P, which is solved in closed form by forcing its derivative to zero.
Next, in step S1507, the other variables (i.e., the variables other than M) are fixed and the function terms unrelated to M are deleted to obtain the objective function expression of the variable M; the objective function expression is differentiated with respect to M, and M is solved by a gradient method using the objective function expression and its partial derivative with respect to M.
In step S1508, the other variables (i.e., the variables other than W_0) are fixed and the function terms unrelated to W_0 are deleted to obtain the objective function expression of the variable W_0, which is solved by forcing its derivative to zero.
In step S1509, the other variables (i.e., the variables other than W_1) are fixed and the function terms unrelated to W_1 are deleted to obtain the objective function expression of the variable W_1, which is solved by forcing its derivative to zero.
In step S1510, the other variables (i.e., the variables other than W_2) are fixed and the function terms unrelated to W_2 are deleted to obtain the objective function expression of the variable W_2, which is solved by forcing its derivative to zero.
Thus, in step S1511, the lagrangian multiplier and the parameters are updated item by item.
Exemplary devices
Referring to FIG. 2, there is schematically shown a structural diagram of a joint low-rank constraint cross-view discrimination subspace learning apparatus according to an embodiment of the present invention. The apparatus may be disposed in a terminal device, for example, in an intelligent electronic device such as a desktop computer, a notebook computer, a smart phone, or a tablet computer; of course, the apparatus according to the embodiment of the present invention may also be disposed in a server. The apparatus 300 of the embodiment of the present invention may include the following constituent units: a storage unit 310, a defining unit 320, a first re-formulation unit 330, a second re-formulation unit 340, a solving unit 350 and an obtaining unit 360.
The storage unit 310 is adapted to divide the image data set containing multiple viewing angles into a test set and a training set, wherein the test set and the training set respectively contain data sets of two different viewing angles.
The defining unit 320 is adapted to define an objective function of the dual low-rank decision subspace learning model, where a first term in the objective function performs low-rank constraint on class structures, elements in a matrix are measures of low-dimensional structural similarity of two samples of different classes, a second term performs low-rank constraint on view variance, and elements in the matrix are measures of low-dimensional structural similarity of two samples of different view angles.
A first re-formulation unit 330, adapted to add the supervised graph regularization term and the subspace to be learned, and to apply an orthogonal constraint to the feature subspace to eliminate trivial solutions, so as to re-formulate the objective function.
A second re-formulation unit 340, adapted to add a joint heterogeneous regularization term to re-formulate the objective function again.
The solving unit 350 is adapted to solve, with the training set, for the value of each variable that minimizes the objective function, and to obtain the feature subspace from the solution of the objective function.
The obtaining unit 360 is adapted to project the test set through the feature subspace to obtain the features of all class images in the data set, and finally to obtain the recognition rate on the data set through the classifier.
It should be understood that, the joint low-rank constraint cross-view discrimination subspace learning apparatus according to the embodiment of the present invention can perform the processing and sub-processing of the joint low-rank constraint cross-view discrimination subspace learning method described above with reference to fig. 1, and can achieve corresponding functions and effects, which are not described herein again.
PREFERRED EMBODIMENTS
In the preferred embodiment, the processing can be performed as follows in steps a to g.
Step a, dividing an image data set into a test set and a training set;
step b, defining an objective function of the discriminant feature subspace learning model,
min_{Z_c,Z_v,E} rank(Z_c) + rank(Z_v) + λ‖E‖_1
s.t. X = X(Z_c + Z_v) + E   (1)
wherein rank(·) represents the rank of a matrix, X = [X_1, X_2, ..., X_k] ∈ R^{d×m} represents a training set containing k views, d represents the dimension of the original feature of each sample, m_i represents the number of training samples of the i-th view (m = Σ_i m_i), Z_c is the robust representation matrix of the class structure, Z_v is the robust representation matrix of the view-variance structure, E is the error matrix, and λ is the balance parameter of the error matrix E. The first term of the objective function imposes a low-rank constraint on the class structure, the elements of its matrix measuring the low-dimensional structural similarity of samples from two different classes, and the second term imposes a low-rank constraint on the view variance, the elements of its matrix measuring the low-dimensional structural similarity of samples from two different views; in this way, the two structures can be separated from each other, and the redundant information of the view structure is stripped from the class structure, so that the global class structure can be better obtained.
Step c: further, the objective function is re-formulated as follows:
min_{Z_c,Z_v,E,P} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I   (2)
wherein ‖·‖_* denotes the nuclear norm, used as a convex surrogate for rank(·), P ∈ R^{d×p} is the projection subspace, P^T is the transpose of P, and α is the balance parameter of the supervised graph regularization term G(P, Z_c, Z_v) in formula (3).
The purpose of constructing formula (3) is to eliminate the redundant information of the view variance while maximally preserving the intra-class feature information. We therefore require the low-dimensional class features Y_c = P^T X Z_c to have minimal intra-class scatter, and the dissimilarity between the low-dimensional view features Y_v = P^T X Z_v to be maximized, so we construct two graph regularization terms:
R_c = (1/2) Σ_{i,j} ‖Y_{c,i} − Y_{c,j}‖_2^2 W_c(i,j) = tr(Y_c L_c Y_c^T),   R_v = (1/2) Σ_{i,j} ‖Y_{v,i} − Y_{v,j}‖_2^2 W_v(i,j) = tr(Y_v L_v Y_v^T)
Y_{c,i} and Y_{c,j} are the i-th and j-th columns of Y_c, Y_{v,i} and Y_{v,j} are the i-th and j-th columns of Y_v, and W_c and W_v are the two weight matrices of the graphs, whose elements are determined by the class labels l_i and l_j of the corresponding samples. Following linear discriminant analysis, we form the graph regularization term by converting the ratio of the two terms into a difference:
G(P, Z_c, Z_v) = tr(P^T X Z_c L_c Z_c^T X^T P) − tr(P^T X Z_v L_v Z_v^T X^T P)   (3)
wherein L_c and L_v are the graph Laplacians of W_c and W_v.
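As a minimal illustration of the graph Laplacians used above, the sketch below (names hypothetical) builds L = D − W from a given symmetric, non-negative weight matrix W, with D the diagonal degree matrix; the label-based construction of W_c and W_v themselves is not reproduced here:

```python
import numpy as np

def graph_laplacian(W):
    """Return L = D - W, where D is the diagonal degree matrix with
    D[i, i] = sum_j W[i, j]; used here for L_c and L_v."""
    D = np.diag(W.sum(axis=1))
    return D - W

# example: L_c = graph_laplacian(W_c) for a label-based weight matrix W_c
```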
Step d: further, the objective function is re-formulated as follows:
min_{Z_c,Z_v,E,P,W_0,W_1,W_2} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v) + γ(‖Y_1 − (W_0 + W_1)^T P^T X_1‖_F^2 + ‖Y_2 − (W_0 + W_2)^T P^T X_2‖_F^2) + ε(‖W_0‖_F^2 + ‖W_1‖_F^2 + ‖W_2‖_F^2)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I   (6)
wherein W_0, W_1 and W_2 are weight matrices, γ and ε are the balance parameters of the weight matrices, Y ∈ R^{L×m} is a matrix determined by the class labels, Y_j = [−1, −1, ..., L−1, ..., −1]^T ∈ R^L denotes the j-th column of Y, and if the j-th sample belongs to the l-th class, the l-th element equals L−1 and the remaining elements equal −1 (L is the number of sample classes).
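The label matrix Y described above can be built as in the following sketch (an illustrative reading of the definition; the function name and array conventions are assumptions):

```python
import numpy as np

def build_label_matrix(labels, num_classes):
    """Column j is [-1, ..., L-1, ..., -1]^T: the entry at the class of
    sample j equals num_classes - 1, every other entry equals -1."""
    Y = -np.ones((num_classes, len(labels)))
    for j, cls in enumerate(labels):
        Y[cls, j] = num_classes - 1
    return Y

# e.g. build_label_matrix([0, 2, 1], num_classes=3) has columns
# [2, -1, -1]^T, [-1, -1, 2]^T and [-1, 2, -1]^T
```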
The joint heterogeneous regularization term added in formula (6) contains a prior term and a regularization term, as follows.
The prior term: ‖Y_1 − (W_0 + W_1)^T P^T X_1‖_F^2 + ‖Y_2 − (W_0 + W_2)^T P^T X_2‖_F^2. W_0 is a global feature weight matrix expressing the feature information shared by the different views, while W_1 and W_2 are independent feature weight matrices expressing the feature information specific to each view, so that the learned projection subspace unifies the shared structure and the independent structures within the same framework; the class-label matrices Y_1 and Y_2 constitute a loss term, and by minimizing this prior loss term the empirical risk of each feature is minimized, so that the independent features of the samples of each view are learned and the recognition rate is improved.
The regularization term: ‖W_0‖_F^2 + ‖W_1‖_F^2 + ‖W_2‖_F^2, where ‖·‖_F denotes the Frobenius norm of a matrix. The regularization term gives the joint learning model stronger generalization ability and an effective closed-form solution.
Therefore, unlike traditionally designed cross-view discriminant terms, the cross-view discriminant learning of this embodiment can effectively optimize intra-class compactness and inter-class separation using the structural information of the latent low-dimensional space across different views.
Specifically, an auxiliary variable M is introduced into the objective function to solve the minimization problem, subject to the constraints:
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P = M,  P^T P = I,  M^T M = I   (7)
step e, solving the values of all variables when the objective function value is minimized through a training set, wherein the values are as follows:
Step e1: determining the Lagrangian function of the objective function problem by the augmented Lagrange multiplier method ALM (Augmented Lagrangian Method), denoted as formula (8),
wherein ⟨A, B⟩ = tr(A^T B) denotes the matrix inner product operator, Q is the Lagrange multiplier, Γ is the Lagrangian function of formula (8), μ and β are parameters introduced by ALM, and P^T is the transpose of the matrix P.
Step e2: the Lagrangian function is simplified and converted into the following minimization problem, formula (10),
s.t. P^T P = I,  M^T M = I   (10)
Step e3: the minimization is solved iteratively for each variable, with the other variables held fixed, by the alternating direction method of multipliers (ADMM). Fixing the other variables and deleting the function terms unrelated to Z_c gives the Z_c subproblem; using a Taylor expansion, the projected-subspace objective is expanded around Z_{c,t} and simplified,
wherein Z_{c,t} denotes the result of Z_c at the t-th iteration, L_c = D_c − W_c denotes the graph Laplacian matrix of W_c, and D_c(i,i) = Σ_j W_c(i,j) is a diagonal matrix.
The resulting problem is a classical nuclear-norm (rank) minimization problem and is solved via the singular value shrinkage operator.
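The singular value shrinkage operator referred to above admits the standard implementation sketched below; the matrix argument and threshold supplied to it at each iteration follow from the subproblem derivation and are not fixed here:

```python
import numpy as np

def singular_value_shrinkage(A, tau):
    """SVT_tau(A) = U diag(max(s - tau, 0)) V^T, the standard solution of
    min_Z tau*||Z||_* + 0.5*||Z - A||_F^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt
```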
Step e4: fixing the other variables and deleting the function terms unrelated to Z_v gives the Z_v subproblem; using a Taylor expansion, the projected-subspace objective is expanded around Z_{v,t} and simplified,
wherein Z_{v,t} denotes the result of Z_v at the t-th iteration, L_v = D_v − W_v denotes the graph Laplacian matrix of W_v, and D_v(i,i) = Σ_j W_v(i,j) is a diagonal matrix.
The resulting problem is likewise a classical nuclear-norm (rank) minimization problem and is solved via the singular value shrinkage operator.
Step e5: fixing the other variables and deleting the function terms unrelated to E, the objective is expanded around E = 0 using a Taylor expansion to obtain the objective function expression of the variable E; solving the cases where an entry of E is greater than 0 and where it is less than 0 yields the element-wise closed-form update of the matrix E.
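Such case-by-case closed forms for an l1-regularized quadratic typically reduce to element-wise soft-thresholding; the sketch below shows that standard rule, with the residual G and the threshold taken as assumed placeholders rather than the patent's exact expressions:

```python
import numpy as np

def soft_threshold(G, tau):
    """Element-wise shrinkage: entries above tau are reduced by tau, entries
    below -tau are increased by tau, and the rest are set to zero. G stands
    for an assumed residual term (not the patent's exact expression)."""
    return np.sign(G) * np.maximum(np.abs(G) - tau, 0)

# a typical E-update would then look like E = soft_threshold(G, lam / mu)
```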
Step e6: fixing the other variables and deleting the function terms unrelated to P gives the objective function expression of the variable P; forcing its derivative to zero yields the closed form
P_{t+1} = (2αX Z_n X^T + μ X_n X_n^T)^{-1} (βM + X_n (E − Q/μ)^T)   (16)
wherein X_n = X − X(Z_{c,t+1} + Z_{v,t+1}) and (·)^{-1} denotes the matrix inverse or pseudo-inverse.
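Formula (16) can be evaluated as in the following sketch; Z_n, X_n and the other matrices are assumed to be precomputed as in the derivation above, and a linear solve (with a pseudo-inverse fallback) replaces the explicit inverse:

```python
import numpy as np

def update_P(X, Zn, Xn, M, E, Q, alpha, mu, beta):
    """Evaluate P = (2*alpha*X Zn X^T + mu*Xn Xn^T)^{-1} (beta*M + Xn (E - Q/mu)^T)
    as in formula (16); Zn and Xn are assumed to be precomputed."""
    A = 2 * alpha * X @ Zn @ X.T + mu * Xn @ Xn.T
    B = beta * M + Xn @ (E - Q / mu).T
    try:
        return np.linalg.solve(A, B)      # preferred: solve the linear system directly
    except np.linalg.LinAlgError:
        return np.linalg.pinv(A) @ B      # fallback: pseudo-inverse for a singular A
```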
Step e7: fixing the other variables and deleting the function terms unrelated to M gives the objective function expression of the variable M, which is rewritten accordingly; the objective function expression is then differentiated with respect to M to obtain its derivative, and M is solved by a gradient method using the objective function expression and its partial derivative with respect to M.
Step e8: fixing the other variables and deleting the function terms unrelated to W_0 gives the objective function expression of the variable W_0; forcing its derivative to zero yields a closed-form solution.
Step e9: fixing the other variables and deleting the function terms unrelated to W_1 gives the objective function expression of the variable W_1; forcing its derivative to zero yields a closed-form solution.
Step e10: fixing the other variables and deleting the function terms unrelated to W_2 gives the objective function expression of the variable W_2; forcing its derivative to zero yields a closed-form solution.
Step e11: the Lagrange multiplier and the parameters are updated item by item according to the following formulas:
Q_{t+1} = Q_t + μ(P_t^T (X − X(Z_c + Z_v)) − E_t)
μ = min(μ_max, ψμ)   (20)
wherein Q is the Lagrange multiplier matrix, ψ and μ are parameters introduced by ALM, and μ_max is the maximum value of the parameter μ within its allowable range.
Step f: solving the objective function yields the values of all variables, and P is the feature subspace obtained after solving.
Step g: the test set is projected through the feature subspace to obtain the features of all class images in the data set, and the recognition rate on the data set is finally obtained through a classifier.
In this embodiment, the feature subspace is first learned; each training sample is then projected onto the feature subspace to extract the features of the class to which it belongs, and the images are then recognized and classified according to the projected features.
For example, once the features of a face image in the training set are projected onto the feature subspace P, the features of that person's images are obtained, and whether a given image shows that person can be determined from these features.
The effect diagram of this preferred embodiment is shown in FIG. 3. In this preferred embodiment, three public data sets are used, comprising two face data sets and one object data set; partial example images are shown in FIG. 4.
One face data set of this embodiment is CMU-PIE, containing 68 different people, each with images under 21 different lighting conditions and 9 different poses; we use the 5 poses P05, P09, P14, P27 and P29. The images of this face data set are cropped to 64 × 64. Every two poses form a group, giving 10 groups: Case1: {P05, P09}, Case2: {P05, P14}, Case3: {P05, P27}, Case4: {P05, P29}, Case5: {P09, P14}, Case6: {P09, P27}, Case7: {P09, P29}, Case8: {P14, P27}, Case9: {P14, P29}, Case10: {P27, P29}. Half of the images of each pose are randomly selected as the training set and the remaining images are used as the test set.
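The half-and-half split described above might be implemented as in the following sketch for a single pose (the array shapes and the fixed random seed are illustrative assumptions):

```python
import numpy as np

def split_half(pose_images, pose_labels, seed=0):
    """Randomly take half of the images of one pose for training and keep
    the rest for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pose_labels))
    half = len(idx) // 2
    train_idx, test_idx = idx[:half], idx[half:]
    return (pose_images[train_idx], pose_labels[train_idx]), \
           (pose_images[test_idx], pose_labels[test_idx])
```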
The other face data set of this embodiment is Extended YaleB, which contains 2414 frontal images of 38 people, each with about 64 images under different lighting conditions. The images of this face data set are cropped to 32 × 32. The images of each person are divided into four near-frontal angle groups P1, P2, P3 and P4, with P1 and P2 forming one group and P3 and P4 forming another. From each pair of views, one set of angles is selected as the training set and another set of angles as the test set, e.g., P1 and P2 as training sets and P3 and P4 as test sets. The data set is thus divided into the following 4 groups: Case1: {P1, P3}, Case2: {P1, P4}, Case3: {P2, P3}, Case4: {P2, P4}.
The object data set of this embodiment is COIL100, containing 7200 images of 100 objects, each object having 72 images captured at 5-degree intervals over consecutive angles. In this embodiment, all images of the object data set are resized to 32 × 32, and the data set is divided into two different-view data sets, COIL1 and COIL2. COIL1 contains 2 sets of angular images for each object, V1 [0°, 85°] and V2 [180°, 265°]; COIL2 contains 2 sets of angular images for each object, V3 [90°, 175°] and V4 [270°, 355°], forming the second view. From each view, one set of angles is selected as the training set and another as the test set, e.g., V1 and V2 as the training set and V3 and V4 as the test set. The data set is divided into 4 groups: Case1: {V1, V3}, Case2: {V1, V4}, Case3: {V2, V3}, Case4: {V2, V4}.
This embodiment (Ours) is compared with several existing feature subspace learning methods, namely PCA, LDA, LPP, RPCA+PCA, LatentLRR, SRRS, LRCS and RMSL. Without loss of generality, the compared methods are each tested with a KNN classifier. For KNN, the classification result is determined by the first K neighbours in the feature subspace, and K is set to 1 in this embodiment. Each experiment is run five times for each data set, and the average recognition result is taken as the recognition rate of each compared method; the recognition rates (%) on the CMU-PIE, Extended YaleB and COIL100 data sets are shown in the tables below.
TABLE 1 CMU-PIE data set
Methods Case1 Case2 Case3 Case4 Case5 Case6 Case7 Case8 Case9 Case10
PCA 75.24 72.02 72.61 74.99 71.58 71.14 72.90 68.57 71.63 71.92
LDA 76.80 73.68 74.08 69.79 72.21 72.24 69.20 63.51 72.93 74.06
LPP 62.40 59.25 60.17 61.97 65.72 66.13 63.34 59.29 58.10 63.72
LatLRR 84.93 81.79 81.87 84.86 81.98 83.09 83.67 75.31 79.71 80.33
SRRS 85.31 82.04 82.33 85.22 83.37 86.17 82.89 77.45 81.64 82.18
LRCS 95.68 91.83 92.30 95.48 89.60 89.23 95.57 87.42 90.88 90.64
RMSL 97.14 92.97 93.70 97.26 91.85 92.99 97.55 88.47 92.02 92.36
Ours 98.39 93.82 94.50 98.19 92.47 93.95 98.35 89.48 92.19 93.89
TABLE 2 Extended YaleB dataset
Methods Case1 Case2 Case3 Case4
PCA 52.59 67.43 66.69 52.30
LDA 53.97 69.03 68.21 53.74
LPP 56.20 70.29 71.19 55.89
LatLRR 70.16 72.88 74.58 70.09
SRRS 72.32 73.12 82.48 84.68
LRCS 71.73 73.25 75.91 72.95
RMSL 74.04 75.33 83.31 85.36
Ours 82.72 83.63 87.15 95.31
TABLE 3 COIL100 data set
By comparison of the data in the above table, this embodiment (Ours) showed higher recognition rates on all data sets than the other comparison methods. The reason is that the structure of the samples in the low-dimensional subspace is well mined using the joint low-rank constraint model, and its coefficients are effectively used as different sample similarity measures to constrain the learned projection subspace. Moreover, different visual angle sample characteristics are put into the same projection subspace through the joint low-rank constraint, so that the model can obtain better self-adaptive capacity and robustness.
For the algorithm solving the objective function, the parameters are set to μ = 0.1, β = 0.5 and ψ = 1.1. For the parameters λ, α, ε and γ in formula (6), Case8 of the CMU-PIE data set is chosen as the test data set to study the impact on the classification results of letting each of λ, α, ε and γ take values in [10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10, 10^2, 10^3, 10^4]. The classification accuracy curves of the raw data with respect to λ, α, ε and γ are shown in FIG. 5. The results show that the classification performance is insensitive to the values of λ, α, ε and γ, and almost identical classification results are obtained over a wide range of these parameters, which demonstrates the stability of this embodiment with respect to parameter selection. For the projection dimension d_p in formula (2), Case8 of the CMU-PIE data set and Case1 of the Extended YaleB data set are chosen as test data sets to study the impact on the classification results of letting d_p take values in [50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550]. The classification accuracy curves of the raw data with respect to the subspace dimension d_p are shown in FIG. 6. The results show that the classification performance is insensitive to increases in the projection dimension d_p: on the PIE data set, whose samples have dimension 64 × 64, the performance rises slightly as d_p increases and reaches its optimum at dimension 300; on the Extended YaleB data set, whose samples have dimension 32 × 32, the performance decreases slightly as d_p increases and reaches its optimum at dimension 100.
This embodiment provides a joint low-rank constraint cross-view discrimination subspace learning method for image feature extraction and for recognition and classification tasks. A cross-view discrimination subspace learning model based on a joint low-rank constraint is established, and a numerical solution method based on the alternating direction method of multipliers is designed for the model to ensure the convergence of the algorithm. The experimental results on three different public test data sets demonstrate the superiority of this embodiment. In addition, when the training samples are disturbed by noise, the experimental results of this embodiment are clearly better and more stable than those of the other compared methods.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor to the division into aspects, which is made merely for convenience of description and does not imply that the features of these aspects cannot be combined to advantage. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (7)

1. A joint low-rank constraint cross-view discrimination subspace learning method, characterized by comprising the following steps:
dividing an image data set containing multiple visual angles into a test set and a training set, wherein the test set and the training set respectively contain data sets of two different visual angles;
defining an objective function of a double low-rank judgment subspace learning model, wherein a first item in the objective function carries out low-rank constraint on class structures, elements in a matrix are measures of low-dimensional structure similarity of samples of two different classes, a second item carries out low-rank constraint on view variance, and elements in the matrix are measures of low-dimensional structure similarity of samples of two different viewing angles;
adding a supervised graph regularization term and the subspace to be learned, and applying an orthogonal constraint to the feature subspace to eliminate trivial solutions, so as to re-formulate the objective function;
adding a joint heterogeneous regularization term to re-formulate the objective function again;
solving the value of each variable when the objective function value is minimized by utilizing the training set;
solving through an objective function to obtain a feature subspace;
and projecting the test set through the feature subspace to obtain the features of all images in the data set, finally obtaining the recognition rate on the data set through a classifier.
2. The joint low-rank constraint cross-view discrimination subspace learning method according to claim 1, wherein the objective function in the step of defining the objective function of the dual low-rank discrimination subspace learning model is as follows:
min_{Z_c,Z_v,E} rank(Z_c) + rank(Z_v) + λ‖E‖_1
s.t. X = X(Z_c + Z_v) + E
wherein rank(·) represents the rank of a matrix, X = [X_1, X_2, ..., X_k] ∈ R^{d×m} represents a training set containing k views, d represents the dimension of the original feature of each sample, m_i represents the number of training samples of the i-th view (m = Σ_i m_i), Z_c is the robust representation matrix of the class structure, Z_v is the robust representation matrix of the view-variance structure, E is the error matrix, and λ is the balance parameter of the error matrix E.
3. The joint low-rank constraint cross-view discrimination subspace learning method according to claim 2, wherein in the step of adding a supervised graph regularization term and the subspace to be learned, and applying an orthogonal constraint to the feature subspace to eliminate trivial solutions so as to re-formulate the objective function, the re-formulated objective function is as follows:
min_{Z_c,Z_v,E,P} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I
wherein ‖·‖_* denotes the nuclear norm, used as a convex surrogate for rank(·), P ∈ R^{d×p} is the projection subspace, α is the balance parameter of the supervised graph regularization term G(P, Z_c, Z_v) = tr(P^T X Z_c L_c Z_c^T X^T P) − tr(P^T X Z_v L_v Z_v^T X^T P), and L_c and L_v are graph Laplacians.
4. The joint low-rank constraint cross-view discrimination subspace learning method according to any one of claims 1 to 3, wherein in the step of adding a joint heterogeneous regularization term to re-formulate the objective function, the re-formulated objective function is as follows:
min_{Z_c,Z_v,E,P,W_0,W_1,W_2} ‖Z_c‖_* + ‖Z_v‖_* + λ‖E‖_1 + α·G(P, Z_c, Z_v) + γ(‖Y_1 − (W_0 + W_1)^T P^T X_1‖_F^2 + ‖Y_2 − (W_0 + W_2)^T P^T X_2‖_F^2) + ε(‖W_0‖_F^2 + ‖W_1‖_F^2 + ‖W_2‖_F^2)
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P^T P = I
wherein W_0, W_1 and W_2 are weight matrices, γ and ε are the balance parameters of the weight matrices, Y ∈ R^{L×m} is a matrix determined by the class labels, Y_j = [−1, −1, ..., L−1, ..., −1]^T ∈ R^L denotes the j-th column of Y, and if the j-th sample belongs to the l-th class, the l-th element equals L−1 and the remaining elements equal −1.
5. The joint low-rank constraint cross-view discrimination subspace learning method according to any one of claims 1 to 4, wherein an auxiliary variable M is introduced into the objective function to solve the minimization problem according to the following constraints:
s.t. P^T X = P^T X(Z_c + Z_v) + E,  P = M,  P^T P = I.
6. The joint low-rank constraint cross-view discrimination subspace learning method according to any one of claims 1 to 5, wherein the method for solving the value of each variable that minimizes the objective function comprises:
determining a Lagrange function in the objective function problem by an augmented Lagrange multiplier method;
simplifying and minimizing the Lagrange function;
iteratively solving the minimization for each variable, with the other variables held fixed, by the alternating direction method of multipliers; fixing the other variables and deleting the function terms unrelated to Z_c to obtain the objective function expression of the variable Z_c, which is solved and updated through the singular value shrinkage operator;
fixing the other variables and deleting the function terms unrelated to Z_v to obtain the objective function expression of the variable Z_v, which is solved and updated through the singular value shrinkage operator;
fixing the other variables and deleting the function terms unrelated to E to obtain the objective function expression of the variable E, and updating the matrix E;
fixing the other variables and deleting the function terms unrelated to P to obtain the objective function expression of the variable P, and solving it in closed form by forcing its derivative to zero;
fixing the other variables and deleting the function terms unrelated to M to obtain the objective function expression of the variable M, differentiating it with respect to M, and solving for M by a gradient method using the objective function expression and its partial derivative with respect to M;
fixing the other variables and deleting the function terms unrelated to W_0 to obtain the objective function expression of the variable W_0, and solving it by forcing its derivative to zero;
fixing the other variables and deleting the function terms unrelated to W_1 to obtain the objective function expression of the variable W_1, and solving it by forcing its derivative to zero;
fixing the other variables and deleting the function terms unrelated to W_2 to obtain the objective function expression of the variable W_2, and solving it by forcing its derivative to zero;
and updating the Lagrange multipliers and parameters item by item.
7. A joint low-rank constraint cross-view discrimination subspace learning apparatus, characterized by comprising:
the storage unit is suitable for dividing an image data set containing multiple visual angles into a test set and a training set, wherein the test set and the training set respectively contain data sets of two different visual angles;
the defining unit is suitable for defining an objective function of the double low-rank judgment subspace learning model, a first item in the objective function carries out low-rank constraint on class structures, elements in a matrix are measures of low-dimensional structure similarity of two samples of different classes, a second item carries out low-rank constraint on view variance, and the elements in the matrix are measures of the low-dimensional structure similarity of the samples of two different view angles;
a first re-formulation unit, adapted to add a supervised graph regularization term and the subspace to be learned, and to apply an orthogonal constraint to the feature subspace to eliminate trivial solutions, so as to re-formulate the objective function;
a second re-formulation unit, adapted to add a joint heterogeneous regularization term to re-formulate the objective function again;
the solving unit is suitable for solving the value of each variable when the objective function value is minimized by utilizing the training set; solving through an objective function to obtain a feature subspace; and
and an obtaining unit, adapted to project the test set through the feature subspace to obtain the features of all class images in the data set, and finally to obtain the recognition rate on the data set through the classifier.
CN201910891895.5A 2019-09-20 2019-09-20 Joint low-rank constraint cross-view-angle discrimination subspace learning method and device Active CN110619367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910891895.5A CN110619367B (en) 2019-09-20 2019-09-20 Joint low-rank constraint cross-view-angle discrimination subspace learning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910891895.5A CN110619367B (en) 2019-09-20 2019-09-20 Joint low-rank constraint cross-view-angle discrimination subspace learning method and device

Publications (2)

Publication Number Publication Date
CN110619367A true CN110619367A (en) 2019-12-27
CN110619367B CN110619367B (en) 2022-05-13

Family

ID=68923583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910891895.5A Active CN110619367B (en) 2019-09-20 2019-09-20 Joint low-rank constraint cross-view-angle discrimination subspace learning method and device

Country Status (1)

Country Link
CN (1) CN110619367B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105893612A (en) * 2016-04-26 2016-08-24 中国科学院信息工程研究所 Consistency expression method for multi-source heterogeneous big data
CN106127218A (en) * 2016-05-25 2016-11-16 中山大学 A kind of multi views spectral clustering launched based on tensor
CN107545276A (en) * 2017-08-01 2018-01-05 天津大学 The various visual angles learning method of joint low-rank representation and sparse regression
CN109522956A (en) * 2018-11-16 2019-03-26 哈尔滨理工大学 A kind of low-rank differentiation proper subspace learning method
CN109583498A (en) * 2018-11-29 2019-04-05 天津大学 A kind of fashion compatibility prediction technique based on low-rank regularization feature enhancing characterization
CN109784360A (en) * 2018-12-03 2019-05-21 北京邮电大学 A kind of image clustering method based on depth multi-angle of view subspace integrated study
CN109858543A (en) * 2019-01-25 2019-06-07 天津大学 The image inferred based on low-rank sparse characterization and relationship can degree of memory prediction technique
CN110009017A (en) * 2019-03-25 2019-07-12 安徽工业大学 A kind of multi-angle of view multiple labeling classification method based on the study of visual angle generic character

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王博岳: "Low-rank representation models on manifolds and their applications" (流形上的低秩表示模型及应用), China Doctoral Dissertations Full-text Database, Information Science and Technology *
雷天鸣: "Research on image classification algorithms based on low-rank and sparse representation" (基于低秩稀疏的图像分类算法研究), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496149A (en) * 2020-03-20 2021-10-12 山东大学 Cross-view gait recognition method for subspace learning based on joint hierarchy selection
CN113496149B (en) * 2020-03-20 2023-04-07 山东大学 Cross-view gait recognition method for subspace learning based on joint hierarchy selection
CN111508043A (en) * 2020-03-24 2020-08-07 东华大学 Woven fabric texture reconstruction method based on discrimination shared dictionary
CN111508043B (en) * 2020-03-24 2022-11-25 东华大学 Woven fabric texture reconstruction method based on discrimination shared dictionary
CN111881413A (en) * 2020-07-28 2020-11-03 中国人民解放军海军航空大学 Multi-source time sequence missing data recovery method based on matrix decomposition
CN112183617A (en) * 2020-09-25 2021-01-05 电子科技大学 RCS sequence feature extraction method for sample and class label maximum correlation subspace
CN112183617B (en) * 2020-09-25 2022-03-29 电子科技大学 RCS sequence feature extraction method for sample and class label maximum correlation subspace
CN112508199A (en) * 2020-11-30 2021-03-16 同盾控股有限公司 Feature selection method, device and related equipment for cross-feature federated learning
CN116935121A (en) * 2023-07-20 2023-10-24 哈尔滨理工大学 Dual-drive feature learning method for cross-region spectral image ground object classification
CN116935121B (en) * 2023-07-20 2024-04-19 哈尔滨理工大学 Dual-drive feature learning method for cross-region spectral image ground object classification
CN117237748A (en) * 2023-11-14 2023-12-15 南京信息工程大学 Picture identification method and device based on multi-view contrast confidence
CN117237748B (en) * 2023-11-14 2024-02-23 南京信息工程大学 Picture identification method and device based on multi-view contrast confidence

Also Published As

Publication number Publication date
CN110619367B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN110619367B (en) Joint low-rank constraint cross-view-angle discrimination subspace learning method and device
Li et al. Structured sparse subspace clustering: A unified optimization framework
Hu et al. Low rank regularization: A review
Zhao et al. A subspace co-training framework for multi-view clustering
Shashua et al. Multi-way clustering using super-symmetric non-negative tensor factorization
US8861873B2 (en) Image clustering a personal clothing model
JP2004152297A (en) Method and system for integrating multiple cue
US9449395B2 (en) Methods and systems for image matting and foreground estimation based on hierarchical graphs
Shao et al. Dynamic dictionary optimization for sparse-representation-based face classification using local difference images
Smith et al. Joint face alignment with non-parametric shape models
Son et al. Spectral clustering with brainstorming process for multi-view data
Peng et al. Integrating feature and graph learning with low-rank representation
Deng et al. Nuclear norm-based matrix regression preserving embedding for face recognition
CN111027582A (en) Semi-supervised feature subspace learning method and device based on low-rank graph learning
Lin et al. Image set-based face recognition using pose estimation with facial landmarks
Meng et al. A general framework for understanding compressed subspace clustering algorithms
Pu et al. Multiview clustering based on robust and regularized matrix approximation
Wei et al. Spectral clustering steered low-rank representation for subspace segmentation
CN110287973B (en) Image feature extraction method based on low-rank robust linear discriminant analysis
Dong et al. Robust affine subspace clustering via smoothed ℓ 0-norm
Han et al. Tensor robust principal component analysis with side information: Models and applications
CN112417234B (en) Data clustering method and device and computer readable storage medium
Liu et al. Classification of nematode image stacks by an information fusion based multilinear approach
Shaw et al. Regression on manifolds using data‐dependent regularization with applications in computer vision
Li et al. Shadow determination and compensation for face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant