CN111310813A - Subspace clustering method and device for potential low-rank representation - Google Patents

Subspace clustering method and device for potential low-rank representation

Info

Publication number
CN111310813A
CN111310813A
Authority
CN
China
Prior art keywords: matrix, rank, clustering, subspace, low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010082142.2A
Other languages
Chinese (zh)
Inventor
曹江中
符益兰
戴青云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202010082142.2A
Publication of CN111310813A
Legal status: Pending

Classifications

(all within G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING)

    • G06F18/23213: Pattern recognition → Analysing → Clustering techniques → Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
    • G06F17/15: Digital computing or data processing equipment or methods, specially adapted for specific functions → Complex mathematical operations → Correlation function computation including computation of convolution operations
    • G06F17/16: Digital computing or data processing equipment or methods, specially adapted for specific functions → Complex mathematical operations → Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F18/2323: Pattern recognition → Analysing → Clustering techniques → Non-hierarchical techniques based on graph theory, e.g. minimum spanning trees [MST] or graph cuts


Abstract

The invention discloses a subspace clustering method and device based on potential (latent) low-rank representation. Data are acquired and preprocessed to obtain a feature matrix. Subspace clustering is performed with a potential low-rank representation that takes the unobserved data samples into account; the Schatten-p norm is used as a regularization term in place of the rank function, converting the NP-hard problem into a solvable one, and the l_p norm is introduced to constrain the error term, so as to construct the optimization objective function of potential low-rank representation subspace clustering. The optimization objective function is then solved to obtain a low-rank representation matrix; an affinity matrix is computed from the low-rank representation matrix; and the affinity matrix is partitioned by a spectral clustering algorithm to realize potential low-rank representation subspace clustering of the data. The method solves the problems of insufficient samples in low-rank representation and the difficulty of minimizing the rank function, enhances the robustness of potential low-rank representation subspace clustering, and improves its performance.

Description

Subspace clustering method and device for potential low-rank representation
Technical Field
The invention relates to the technical field of pattern recognition, and in particular to a subspace clustering method and device based on potential low-rank representation (latent low-rank representation).
Background
As science progresses and artificial intelligence develops, pattern recognition processes and analyzes various forms of information characterizing a thing or phenomenon, thereby describing, recognizing, classifying, and interpreting the thing or phenomenon. Subspace clustering is widely used in many applications, such as images, video, text, etc.
The importance of subspaces naturally leads to the problem of subspace segmentation, whose goal is to partition (or group) data into clusters, each corresponding to one subspace. The main challenge facing subspace segmentation is how to efficiently handle the coupling between noise correction and data segmentation. Potential low-rank representation subspace clustering, used as a subspace segmentation algorithm, can be regarded as an enhanced version of low-rank representation and obtains more accurate segmentation results. It can automatically extract salient features from corrupted data, thereby generating effective features for classification, and has attracted extensive attention in the related technical fields. The potential low-rank representation subspace can take data that are not observed into account, thereby improving clustering performance, and to address the problem of insufficient samples, the related art generally constrains the regularization term with the nuclear (kernel) norm. However, the related art only considers the nuclear norm as an approximate surrogate of the rank function; by the definitions of the rank function and the nuclear norm, when the singular values of a matrix are large the nuclear norm is too loose to estimate the rank of the matrix accurately, so that clustering performance is reduced, accuracy is not high, and robustness is not strong.
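This looseness of the nuclear norm can be illustrated numerically. The sketch below is only an illustration (it assumes NumPy; the rank-3 test matrix and the choice p = 0.5 are arbitrary, not taken from the patent): one dominant singular value inflates the nuclear norm far beyond the rank, while a Schatten-p sum with p = 0.5 stays much closer to it.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build an exactly rank-3 matrix with one dominant singular value.
U, _ = np.linalg.qr(rng.standard_normal((20, 3)))
V, _ = np.linalg.qr(rng.standard_normal((20, 3)))
X = U @ np.diag([50.0, 2.0, 1.0]) @ V.T

s = np.linalg.svd(X, compute_uv=False)
true_rank = int(np.sum(s > 1e-8))                   # number of nonzero singular values: 3
nuclear_norm = float(s.sum())                       # 50 + 2 + 1 = 53, far from the rank
schatten_half = float(np.sum(s[s > 1e-8] ** 0.5))   # ~9.49, much closer to the rank
```

Here the nuclear norm evaluates to 53 for a rank-3 matrix, while the Schatten-0.5 sum is about 9.5: the behavior motivating the Schatten-p regularizer used below.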
Disclosure of Invention
The invention provides a subspace clustering method and device for potential low-rank representation, and aims to solve the problems that low-rank representation samples are insufficient in existing subspace clustering, robustness of potential low-rank representation subspace clustering is not strong, and performance is insufficient.
In order to achieve the above purpose, the technical solution adopted is as follows:
A subspace clustering method of potential low-rank representation, comprising the following steps:
S1, acquiring data and preprocessing the data to obtain a feature matrix;
S2, based on the feature matrix, using the Schatten-p norm as a regularization term in place of the rank function and the l_p norm as the constraint function of the error term, to construct an optimization objective function of potential low-rank representation subspace clustering;
S3, solving the optimization objective function to obtain a low-rank representation matrix;
S4, calculating an affinity matrix based on the low-rank representation matrix;
and S5, calculating and partitioning the affinity matrix by using a spectral clustering algorithm, to realize the potential low-rank representation subspace clustering of the data.
In this scheme, building on low-rank representation subspace clustering, subspace clustering is performed with a potential low-rank representation of the unobserved data samples; the Schatten-p norm is used as a regularization term in place of the rank function, converting the NP-hard problem into a solvable one, and the l_p norm is introduced to constrain the error term. This solves the problems of insufficient samples for low-rank representation and the difficulty of minimizing the rank function, enhances the robustness of potential low-rank representation subspace clustering, and improves its performance.
Preferably, after the feature matrix is obtained in step S1, the method further includes normalizing each feature point in the feature matrix. In this preferred scheme, normalization facilitates subsequent data processing.
Preferably, the optimization objective function of the potential low-rank representation subspace clustering described in step S2 is specifically:
$$\min_{Z,L,E}\ \|Z\|_{S_p}^{p} + \|L\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E;$$

where Z is the subspace low-rank representation matrix, L is the subspace saliency (sparse) representation matrix, X is the feature matrix, E is the reconstruction error matrix, and λ is a hyperparameter controlling the loss penalty; ‖·‖_{S_p} is the Schatten-p norm, defined as

$$\|Z\|_{S_p} = \Big(\sum_{i} \sigma_i(Z)^{p}\Big)^{1/p}, \qquad 0 < p \le \infty,$$

where σ_i(Z) is the i-th singular value of Z; ‖·‖_p is the l_p norm, defined as

$$\|E\|_{p} = \Big(\sum_{i,j} |E_{i,j}|^{p}\Big)^{1/p}.$$
Preferably, the step S3 specifically includes the following steps:
introducing auxiliary variables J and S into the optimization objective function, with Z = J and L = S:

$$\min_{Z,L,J,S,E}\ \|J\|_{S_p}^{p} + \|S\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E,\ Z = J,\ L = S;$$

the constrained problem is converted into an augmented Lagrangian function by the Lagrange multiplier method, and each variable in the augmented Lagrangian is then iteratively optimized by an alternating scheme until convergence, thereby obtaining the low-rank representation matrix.
Preferably, the step S5 specifically includes the following steps:
calculating the degree matrix of the affinity matrix using the following formula:

$$D_{i,i} = \sum_{j} S_{i,j},$$

where the degree matrix D is a diagonal square matrix, D_{i,i} is the i-th diagonal element of the degree matrix, and S_{i,j} is the element in row i, column j of the affinity matrix;
calculating the normalized Laplacian matrix L by

$$L = I - D^{-1/2} S D^{-1/2},$$

where D is the degree matrix, S is the affinity matrix, and I is the identity matrix;
calculating the eigenvectors of the Laplacian matrix L and arranging the k eigenvectors corresponding to the k smallest eigenvalues as the columns of a matrix X = [x_1, x_2, …, x_k] ∈ R^{n×k};
converting each row vector of this matrix into a unit vector to obtain the target matrix;
and clustering the rows of the target matrix by the K-means clustering method to obtain k clustering results, thereby realizing the potential low-rank representation subspace clustering.
The invention also provides a subspace clustering device of potential low-rank representation, comprising:
the data preprocessing module, used for acquiring data and preprocessing the data to obtain a feature matrix;
the optimization objective function construction module, used for, based on the feature matrix, using the Schatten-p norm as a regularization term in place of the rank function and the l_p norm as the constraint function of the error term, to construct the optimization objective function of potential low-rank representation subspace clustering;
the subspace representation matrix calculation module is used for solving the optimization objective function to obtain a low-rank representation matrix;
the affinity matrix calculation module is used for calculating to obtain an affinity matrix based on the low-rank representation matrix;
and the subspace clustering module is used for calculating and dividing the affinity matrix by utilizing a spectral clustering algorithm to realize the potential low-rank expression subspace clustering of the data.
In this scheme, the optimization objective function construction module, building on low-rank representation subspace clustering, performs subspace clustering with a potential low-rank representation of the unobserved data samples, uses the Schatten-p norm as a regularization term in place of the rank function to convert the NP-hard problem into a solvable one, and introduces the l_p norm to constrain the error term; this solves the problems of insufficient samples for low-rank representation and the difficulty of minimizing the rank function, enhances the robustness of potential low-rank representation subspace clustering, and improves its performance.
Preferably, the data preprocessing module is further configured to perform normalization processing on each feature point in the feature matrix after the feature matrix is obtained.
Preferably, the optimization objective function of the potential low-rank representation subspace cluster constructed by the optimization objective function construction module specifically includes:
$$\min_{Z,L,E}\ \|Z\|_{S_p}^{p} + \|L\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E;$$

where Z is the subspace low-rank representation matrix, L is the subspace saliency (sparse) representation matrix, X is the feature matrix, E is the reconstruction error matrix, and λ is a hyperparameter controlling the loss penalty; ‖·‖_{S_p} is the Schatten-p norm, defined as

$$\|Z\|_{S_p} = \Big(\sum_{i} \sigma_i(Z)^{p}\Big)^{1/p}, \qquad 0 < p \le \infty,$$

where σ_i(Z) is the i-th singular value of Z; ‖·‖_p is the l_p norm, defined as

$$\|E\|_{p} = \Big(\sum_{i,j} |E_{i,j}|^{p}\Big)^{1/p}.$$
Preferably, the subspace representation matrix calculation module is further configured to:
introduce auxiliary variables J and S into the optimization objective function, with Z = J and L = S:

$$\min_{Z,L,J,S,E}\ \|J\|_{S_p}^{p} + \|S\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E,\ Z = J,\ L = S;$$

convert the constrained problem into an augmented Lagrangian function by the Lagrange multiplier method; and then iteratively optimize each variable in the augmented Lagrangian by an alternating scheme until convergence, thereby obtaining the low-rank representation matrix.
Preferably, the subspace clustering module is further configured to:
calculate the degree matrix of the affinity matrix using the following formula:

$$D_{i,i} = \sum_{j} S_{i,j},$$

where the degree matrix D is a diagonal square matrix, D_{i,i} is the i-th diagonal element of the degree matrix, and S_{i,j} is the element in row i, column j of the affinity matrix;
calculate the normalized Laplacian matrix L by

$$L = I - D^{-1/2} S D^{-1/2},$$

where D is the degree matrix, S is the affinity matrix, and I is the identity matrix;
calculate the eigenvectors of the Laplacian matrix L and arrange the k eigenvectors corresponding to the k smallest eigenvalues as the columns of a matrix X = [x_1, x_2, …, x_k] ∈ R^{n×k};
convert each row vector of this matrix into a unit vector to obtain the target matrix;
and cluster the rows of the target matrix by the K-means clustering method to obtain k clustering results, thereby realizing the potential low-rank representation subspace clustering.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
According to the invention, subspace clustering based on potential low-rank representation can account for unobserved data samples, solving the problem of sample shortage in low-rank representation; the Schatten-p norm, which approximates the rank function better than the nuclear norm, is used in its place, converting the NP-hard problem into a solvable one; and the l_p norm is introduced to constrain the error term, so as to construct the optimization objective function of potential low-rank representation subspace clustering. The invention improves the robustness and the clustering performance of the algorithm, and solves the problems of insufficient low-rank representation samples, weak robustness, and inadequate performance of potential low-rank representation subspace clustering in existing subspace clustering.
In addition, the invention provides a corresponding implementation apparatus for the potential low-rank representation subspace clustering method, giving the method greater practicability and the apparatus corresponding advantages.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a block diagram of the apparatus of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
This embodiment 1 provides a subspace clustering method of potential low-rank representation, as shown in fig. 1, comprising the following steps:
S1, acquiring data and preprocessing the data to obtain a feature matrix;
The preprocessing step uses means commonly known in the art. For example, image data are preprocessed by normalizing and gray-level-correcting the target image, removing noise, and extracting edges, regions, or textures of the target image as experimental features; e.g., Gabor features are extracted from face image data, and HOG features from a handwritten data set. The resulting feature matrix X = [x_1, x_2, …, x_N] ∈ R^{D×N} is formed from the data vectors, where each column corresponds to the feature vector of one feature point, D is the dimension of the feature space, and N is the number of feature points;
In order to facilitate subsequent data processing, each feature point in the feature matrix is normalized, for example by scaling each feature vector to unit length:

$$x_i' = \frac{x_i}{\|x_i\|_2},$$

where x_i' is the normalized value of the i-th feature point and x_i is the value of the i-th feature point before normalization.
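A minimal sketch of this normalization step, assuming NumPy. The patent's exact formula is an unrendered image, so the unit-l2 scaling of each feature point (column) is an assumption here:

```python
import numpy as np

def normalize_features(X):
    """Scale each feature point (column of X) to unit l2 length.

    Unit-norm scaling x_i' = x_i / ||x_i||_2 is assumed; all-zero
    columns are left unchanged to avoid division by zero."""
    norms = np.linalg.norm(X, axis=0)
    norms = np.where(norms == 0, 1.0, norms)
    return X / norms

F = np.array([[3.0, 0.0],
              [4.0, 2.0]])
Fn = normalize_features(F)  # columns become [0.6, 0.8] and [0.0, 1.0]
```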
S2, based on the characteristic matrix, using Schatten-p norm as a regular term to replace a rank function and using lpThe norm is used as a constraint function of an error term to construct an optimized objective function of the potential low-rank expression subspace clustering:
To solve the objective function minimization problem, an objective function of potential low-rank representation subspace clustering is first constructed in this step as a rank minimization problem:

$$\min_{Z,E}\ \mathrm{rank}(Z) + \lambda\|E\|_{p} \quad \text{s.t.}\ X = [X_0, X_H]\,Z + E;$$

where rank(·) denotes the rank of a matrix, Z is the subspace low-rank representation matrix, X is the feature matrix, X_0 is the observed data sample matrix, X_H is the unobserved data sample matrix, E is the reconstruction error matrix, and λ is a hyperparameter controlling the loss penalty;
However, in the related art the low-rank matrix is usually solved by using the nuclear norm as the best convex approximation of the rank function. In order to fully account for the observed data, that is, to find an optimal low-rank representation matrix, this embodiment uses the Schatten-p norm of the matrix as the regularization term to estimate the rank function and the l_p norm as the constraint function of the error term, and constructs the objective function in combination with the potential low-rank representation;
That is, in this step the optimization objective function of potential low-rank representation subspace clustering is constructed:

$$\min_{Z,L,E}\ \|Z\|_{S_p}^{p} + \|L\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E;$$

where Z is the subspace low-rank representation matrix, L is the subspace saliency (sparse) representation matrix, X is the feature matrix, E is the reconstruction error matrix, and λ is a hyperparameter controlling the loss penalty;
‖·‖_{S_p} is the Schatten-p norm, defined as

$$\|Z\|_{S_p} = \Big(\sum_{i} \sigma_i(Z)^{p}\Big)^{1/p}, \qquad 0 < p \le \infty,$$

where σ_i(Z) is the i-th singular value of Z; it is used to constrain the matrix to be low-rank. The rank of a matrix is the number of its nonzero singular values; the rank function is non-convex, so minimizing it directly is NP-hard, whereas the Schatten-p norm is a tractable surrogate that approximates the rank, and Schatten-p norm minimization is therefore used to approximate the low-rank constraint.
‖·‖_p is the l_p norm, defined as

$$\|E\|_{p} = \Big(\sum_{i,j} |E_{i,j}|^{p}\Big)^{1/p}.$$
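The two norms appearing in the objective can be computed directly from their definitions. A small sketch assuming NumPy (for 0 < p < 1 these are quasi-norms; the function names are illustrative, not from the patent):

```python
import numpy as np

def schatten_p_norm(Z, p):
    """Schatten-p (quasi-)norm: (sum_i sigma_i(Z)^p)^(1/p)."""
    sigma = np.linalg.svd(Z, compute_uv=False)
    return np.sum(sigma ** p) ** (1.0 / p)

def lp_norm(E, p):
    """Entry-wise l_p (quasi-)norm: (sum_{i,j} |E_ij|^p)^(1/p)."""
    return np.sum(np.abs(E) ** p) ** (1.0 / p)
```

For p = 1 the Schatten-p norm reduces to the nuclear norm, and for p = 2 the l_p norm reduces to the Frobenius norm, which is a quick way to sanity-check the definitions.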
S3, solving the optimization objective function to obtain a low-rank expression matrix;
In this embodiment, the optimization objective function is converted into a separable constrained problem by introducing auxiliary variables J and S, with Z = J and L = S:

$$\min_{Z,L,J,S,E}\ \|J\|_{S_p}^{p} + \|S\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E,\ Z = J,\ L = S;$$

The constrained problem is converted into an augmented Lagrangian function by the Lagrange multiplier method, and each variable in the augmented Lagrangian is then iteratively optimized by an alternating scheme until convergence, thereby obtaining the low-rank representation matrix. The augmented Lagrangian is solved by the following steps:
A1. Set parameters and initialize: Z = J = 0, L = S = 0, E = 0, Y1 = 0, Y2 = 0, Y3 = 0, μ = 10^-6, max_μ = 10^6, ρ = 1.1, and ε = 10^-6, where Y1, Y2, Y3 are the Lagrange multiplier matrices for the constraints X = XZ + LX + E, Z = J, and L = S respectively, μ is the penalty parameter and max_μ its upper bound, ρ is the update coefficient of the penalty parameter, and ε is the convergence threshold;
A2. Update J:

$$J = \arg\min_{J}\ \frac{1}{\mu}\|J\|_{S_p}^{p} + \frac{1}{2}\|J - G\|_{F}^{2}, \qquad G = Z + Y_2/\mu.$$

Specifically, let G = Q_G Σ_G V_G^T be the singular value decomposition of G, where Q_G and V_G are the matrices of left and right singular vectors of G. The optimal solution is J = Q_G Δ V_G^T, where Δ is a diagonal matrix whose i-th diagonal entry δ_i (the i-th singular value of J) is obtained from the i-th singular value σ_i of G by solving the scalar problem

$$\delta_i = \arg\min_{x \ge 0}\ \frac{1}{2}(x - \sigma_i)^2 + \lambda' x^{p}, \qquad \lambda' = 1/\mu,$$

by generalized soft-thresholding with the threshold constant

$$\nu_1 = \big(2\lambda'(1-p)\big)^{\frac{1}{2-p}} + \lambda' p\,\big(2\lambda'(1-p)\big)^{\frac{p-1}{2-p}}.$$

The optimal solution x* is distinguished in two cases: 1) when σ_i ≤ ν_1, x* = 0; 2) when σ_i > ν_1, x* is obtained by the iterative computation x^{(i+1)} = σ_i − λ'p (x^{(i)})^{p−1}. From the solved x* the diagonal matrix Δ is constructed, finally giving the optimal solution of J as J = Q_G Δ V_G^T.
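The scalar subproblem of step A2 can be sketched as a generalized soft-thresholding routine, assuming NumPy. The threshold constant ν1 below is the standard generalized soft-thresholding (GST) formula, taken as an assumption since the defining equation in the text is an unrendered image; the loop matches the fixed-point iteration x^(i+1) = σ − λ·p·(x^(i))^(p−1) stated above:

```python
import numpy as np

def gst(sigma, lam, p, iters=100):
    """Generalized soft-thresholding for min_x 0.5*(x - sigma)^2 + lam*x^p.

    Below the threshold nu1 the minimizer is x* = 0; above it, the
    fixed-point iteration x <- sigma - lam*p*x^(p-1) converges to the
    nonzero minimizer (assumes 0 < p < 1)."""
    base = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))
    nu1 = base + lam * p * base ** (p - 1.0)
    if sigma <= nu1:
        return 0.0
    x = sigma
    for _ in range(iters):
        x = sigma - lam * p * x ** (p - 1.0)
    return x
```

At the returned nonzero solution the stationarity condition x + λ·p·x^(p−1) = σ holds, which gives a direct way to verify the routine.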
A3. Update S:

$$S = \arg\min_{S}\ \frac{1}{\mu}\|S\|_{S_p}^{p} + \frac{1}{2}\|S - M\|_{F}^{2}, \qquad M = L + Y_3/\mu.$$

Specifically, this is solved in the same way as the J-subproblem in A2: let M = Q_M Σ_M V_M^T be the singular value decomposition of M, where Q_M and V_M are the matrices of left and right singular vectors of M; the optimal solution is S = Q_M Δ V_M^T, with Δ obtained by the same generalized soft-thresholding of the singular values of M.
A4. Update Z: Z = (I + X^T X)^{-1} (X^T(X − LX − E) + J + (X^T Y1 − Y2)/μ)
A5. Update L: L = ((X − XZ − E)X^T + S + (Y1 X^T − Y3)/μ)(I + X X^T)^{-1}
A6. Update E:

$$E = \arg\min_{E}\ \frac{\lambda}{\mu}\|E\|_{p}^{p} + \frac{1}{2}\|E - N\|_{F}^{2}, \qquad N = X - XZ - LX + Y_1/\mu.$$

Specifically, this is solved entry-wise in a manner consistent with A2, applying the generalized soft-thresholding with weight λ' = λ/μ to the magnitude n of each entry of N (with its sign restored afterwards), where the optimal solution x* is distinguished in three cases: 1) when n < ν_1, x* = 0; 2) when n = ν_1, x* = υ, the nonzero stationary point; 3) when n > ν_1, x* is obtained by the iterative computation x^{(i+1)} = n − λ'p (x^{(i)})^{p−1}.
A7. Update the multipliers: Y1 = Y1 + μ(X − XZ − LX − E), Y2 = Y2 + μ(Z − J), Y3 = Y3 + μ(L − S)
A8. Update the penalty parameter: μ = min(ρμ, max_μ).
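As a sanity check on the closed-form Z-update of step A4, the sketch below (assuming NumPy; all matrices are random placeholders, and the constraint is taken as X = XZ + LX + E, matching steps A6 and A7) computes Z and verifies that the gradient of the corresponding quadratic subproblem of the augmented Lagrangian vanishes at the solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
X = rng.standard_normal((n, n))
L_mat = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
J = rng.standard_normal((n, n))
Y1 = rng.standard_normal((n, n))
Y2 = rng.standard_normal((n, n))
mu = 0.5

# Step A4: closed-form minimizer of the Z-subproblem
#   mu/2 * ||X - XZ - L X - E + Y1/mu||_F^2 + mu/2 * ||Z - J + Y2/mu||_F^2
Z = np.linalg.solve(np.eye(n) + X.T @ X,
                    X.T @ (X - L_mat @ X - E) + J + (X.T @ Y1 - Y2) / mu)

# The (scaled) gradient of the subproblem must vanish at this Z.
grad = -X.T @ (X - X @ Z - L_mat @ X - E + Y1 / mu) + (Z - J + Y2 / mu)
```

Setting this gradient to zero and solving for Z reproduces exactly the expression in step A4, which is where the closed form comes from.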
It should be noted that other methods may also be used to solve the optimization problem of the optimization objective function in this step, and this embodiment only exemplifies one of the calculation methods.
S4, calculating to obtain an affinity matrix based on the low-rank representation matrix;
For example, the affinity matrix S may be computed from the low-rank representation matrix Z as S = |Z^T Z|, with the absolute value taken element-wise.
It should be noted that, in this step, besides the above method for calculating the affinity matrix, other methods may also be used for calculation, and this embodiment only exemplifies one of the calculation methods, and this application does not limit this.
S5, calculating and partitioning the affinity matrix by using a spectral clustering algorithm to realize the potential low-rank expression subspace clustering of the data:
The degree matrix of the affinity matrix is calculated using the following formula:

$$D_{i,i} = \sum_{j} S_{i,j},$$

where the degree matrix D is a diagonal square matrix, D_{i,i} is the i-th diagonal element of the degree matrix, and S_{i,j} is the element in row i, column j of the affinity matrix;
the normalized Laplacian matrix L is calculated by

$$L = I - D^{-1/2} S D^{-1/2},$$

where D is the degree matrix, S is the affinity matrix, and I is the identity matrix;
the eigenvectors of the Laplacian matrix L are calculated, and the k eigenvectors corresponding to the k smallest eigenvalues are arranged as the columns of a matrix X = [x_1, x_2, …, x_k] ∈ R^{n×k};
each row vector of this matrix is converted into a unit vector to obtain the target matrix;
and the rows of the target matrix are clustered by the K-means clustering method to obtain k clustering results, thereby realizing the potential low-rank representation subspace clustering.
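The spectral clustering stage of step S5 can be sketched end to end, assuming NumPy; the deterministic farthest-point initialization of K-means is an implementation choice, not specified in the patent:

```python
import numpy as np

def spectral_clustering(S, k, iters=100):
    """Steps of S5 on an affinity matrix S: degree matrix, normalized
    Laplacian L = I - D^{-1/2} S D^{-1/2}, eigenvectors of the k smallest
    eigenvalues, row normalization, then plain Lloyd-style K-means."""
    deg = S.sum(axis=1)                                  # D_ii = sum_j S_ij
    d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    n = len(S)
    Lap = np.eye(n) - (d[:, None] * S) * d[None, :]
    _, vecs = np.linalg.eigh(Lap)                        # eigenvalues ascending
    Xk = vecs[:, :k]                                     # k smallest eigenvalues
    Xk = Xk / np.maximum(np.linalg.norm(Xk, axis=1, keepdims=True), 1e-12)
    # Deterministic farthest-point initialization, then Lloyd iterations.
    centers = [Xk[0]]
    for _ in range(1, k):
        dist = np.min([((Xk - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(Xk[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((Xk[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Xk[labels == j].mean(axis=0)
    return labels
```

On a block-diagonal affinity matrix with two groups, the routine recovers the two blocks exactly, since the eigenvectors of the zero eigenvalues of the Laplacian are constant within each connected component.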
Example 2
This embodiment 2 provides a corresponding implementation apparatus for the subspace clustering method of potential low-rank representation provided in embodiment 1, making the method more practical. The subspace clustering apparatus of potential low-rank representation described below and the subspace clustering method of potential low-rank representation described above may be referred to in correspondence with each other.
As shown in fig. 2, the apparatus includes:
the data preprocessing module, used for acquiring data, preprocessing the data to obtain a feature matrix, and normalizing each feature point in the feature matrix;
the optimization objective function construction module, which constructs, based on the feature matrix, the objective function of potential low-rank representation subspace clustering, using the Schatten-p norm as a regularization term in place of the rank function and the l_p norm as the constraint function of the error term, thereby obtaining the optimization objective function of potential low-rank representation subspace clustering;
the subspace representation matrix calculation module is used for solving the optimization objective function to obtain a low-rank representation matrix;
the affinity matrix calculation module is used for calculating to obtain an affinity matrix based on the low-rank representation matrix;
and the subspace clustering module is used for calculating and dividing the affinity matrix by utilizing a spectral clustering algorithm to realize the potential low-rank expression subspace clustering of the data.
The functions of the functional modules of the potential low-rank-representation subspace clustering device provided in the embodiment of the present invention may be specifically implemented according to the method in the above method embodiment 1, and the specific implementation process may refer to the related description of the above method embodiment 1, which is not described herein again.
The terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A method for subspace clustering of potential low-rank representations, characterized by comprising the following steps:
S1, acquiring data and preprocessing the data to obtain a feature matrix;
S2, based on the feature matrix, using the Schatten-p norm as a regularization term in place of the rank function and the l_p norm as the constraint function of the error term, constructing an optimization objective function of potential low-rank representation subspace clustering;
S3, solving the optimization objective function to obtain a low-rank representation matrix;
S4, calculating an affinity matrix based on the low-rank representation matrix;
and S5, calculating and partitioning the affinity matrix by using a spectral clustering algorithm to realize the potential low-rank representation subspace clustering of the data.
2. The method for subspace clustering according to claim 1, wherein the step S1 further comprises normalizing each feature point in the feature matrix after obtaining the feature matrix.
3. The method for clustering subspaces of potential low rank representations according to claim 1, wherein the optimization objective function of the subspace clustering of potential low rank representations in step S2 is specifically:
$$\min_{Z,L,E}\ \|Z\|_{S_p}^{p} + \|L\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E;$$

wherein Z is the subspace low-rank representation matrix, L is the subspace saliency (sparse) representation matrix, X is the feature matrix, E is the reconstruction error matrix, and λ is a hyperparameter controlling the loss penalty; ‖·‖_{S_p} is the Schatten-p norm, defined as

$$\|Z\|_{S_p} = \Big(\sum_{i} \sigma_i(Z)^{p}\Big)^{1/p}, \qquad 0 < p \le \infty,$$

where σ_i(Z) is the i-th singular value of Z; ‖·‖_p is the l_p norm, defined as

$$\|E\|_{p} = \Big(\sum_{i,j} |E_{i,j}|^{p}\Big)^{1/p}.$$
4. The method for subspace clustering of potential low-rank representations according to claim 3, wherein said step S3 specifically comprises the following steps:
introducing auxiliary variables J and S into the optimization objective function, with Z = J and L = S:

$$\min_{Z,L,J,S,E}\ \|J\|_{S_p}^{p} + \|S\|_{S_p}^{p} + \lambda\|E\|_{p}^{p} \quad \text{s.t.}\ X = XZ + LX + E,\ Z = J,\ L = S;$$
and converting the constraint condition into an augmented Lagrange function by using a Lagrange multiplier method, and then performing iterative optimization on various variables in the augmented Lagrange function by using an alternating method until convergence so as to obtain a low-rank representation matrix.
5. The method for subspace clustering of potential low-rank representations according to claim 4, wherein step S5 specifically comprises the steps of:

calculating the degree matrix of the affinity matrix by the formula

D_{i,i} = Σ_j S_{i,j};

wherein the degree matrix D is a diagonal square matrix, D_{i,i} is its i-th diagonal element, and S_{i,j} is the element in row i, column j of the affinity matrix;

calculating the normalized Laplacian matrix L by

L = I - D^{-1/2} S D^{-1/2};

wherein D is the degree matrix, S is the affinity matrix, and I is the identity matrix;

calculating the eigenvectors of the Laplacian matrix L and stacking the eigenvectors corresponding to its k smallest eigenvalues (equivalently, the k largest eigenvalues of D^{-1/2} S D^{-1/2}) as the columns of a matrix X = [x_1, x_2, …, x_k] ∈ R^{n×k};

normalizing each row vector of this matrix to unit length to obtain the target matrix;

and clustering the target matrix by the K-means clustering method to obtain k clusters, thereby realizing the potential low-rank representation subspace clustering.
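The spectral clustering steps of claim 5 (degree matrix, normalized Laplacian, eigenvector embedding, row normalization, K-means) can be sketched in numpy as follows. This is an illustrative implementation, not the patent's: the deterministic farthest-point initialization of K-means is our simplification.

```python
import numpy as np

def spectral_clustering(S, k, iters=100):
    """Partition a symmetric nonnegative affinity matrix S into k clusters."""
    # Degree matrix: D_ii = sum_j S_ij (kept as a vector of diagonal entries)
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Normalized Laplacian L = I - D^{-1/2} S D^{-1/2}
    L = np.eye(len(d)) - d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
    # Eigenvectors for the k smallest eigenvalues (eigh returns ascending order)
    _, vecs = np.linalg.eigh(L)
    V = vecs[:, :k].copy()
    # Row-normalize to unit length: the "target matrix"
    V /= np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
    # Plain K-means on the rows, with deterministic farthest-point init
    centers = [V[0]]
    for _ in range(1, k):
        dists = np.min([((V - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(V[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((V[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = V[labels == j].mean(axis=0)
    return labels
```

For an affinity matrix with two disconnected blocks, the Laplacian has two zero eigenvalues whose eigenvectors are constant on each block, so the row-normalized embedding separates the blocks cleanly before K-means runs.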
6. An apparatus for subspace clustering of potential low-rank representations, comprising:

a data preprocessing module, configured to acquire data and preprocess it to obtain a feature matrix;

an optimization objective function construction module, configured to construct the objective function of potential low-rank representation subspace clustering based on the feature matrix, using the Schatten-p norm as a regularization term in place of the rank function and the l_p norm as the constraint function on the error term, thereby constructing the optimization objective function of the potential low-rank representation subspace clustering;

a subspace representation matrix calculation module, configured to solve the optimization objective function to obtain a low-rank representation matrix;

an affinity matrix calculation module, configured to compute an affinity matrix from the low-rank representation matrix;

and a subspace clustering module, configured to compute and partition the affinity matrix with a spectral clustering algorithm, thereby realizing the potential low-rank representation subspace clustering of the data.
7. The apparatus according to claim 6, wherein the data preprocessing module is further configured to normalize each feature point in the feature matrix after obtaining the feature matrix.
8. The apparatus for subspace clustering according to claim 6, wherein the optimization objective function of the potential low-rank representation subspace clustering constructed by the optimization objective function construction module is specifically:

min_{Z,L,E} ||Z||_{Sp}^p + ||L||_{Sp}^p + λ||E||_{lp}^p

s.t. X = XZ + XL + E;

wherein Z is the subspace low-rank representation matrix, L is the subspace sparse representation matrix, X is the feature matrix, E is the reconstruction error matrix, and λ is a hyperparameter controlling the loss penalty; ||·||_{Sp} is the Schatten-p norm, defined for a matrix A ∈ R^{m×n} with singular values σ_1, …, σ_{min(m,n)} as

||A||_{Sp} = (Σ_{i=1}^{min(m,n)} σ_i^p)^{1/p};

and ||·||_{lp} is the l_p norm, defined as

||A||_{lp} = (Σ_{i=1}^{m} Σ_{j=1}^{n} |a_{ij}|^p)^{1/p}.
9. The apparatus for subspace clustering of potential low-rank representations according to claim 8, wherein the subspace representation matrix calculation module is further configured to:

introduce auxiliary variables J and S into the optimization objective function, with Z = J and L = S:

min_{Z,L,E,J,S} ||J||_{Sp}^p + ||S||_{Sp}^p + λ||E||_{lp}^p

s.t. X = XZ + XL + E, Z = J, L = S;

and convert the constrained problem into an augmented Lagrangian function by the Lagrange multiplier method, then iteratively optimize each variable of the augmented Lagrangian function by an alternating scheme until convergence, thereby obtaining the low-rank representation matrix.
10. The apparatus for subspace clustering of potential low-rank representations according to claim 9, wherein the subspace clustering module is further configured to:

calculate the degree matrix of the affinity matrix by the formula

D_{i,i} = Σ_j S_{i,j};

wherein the degree matrix D is a diagonal square matrix, D_{i,i} is its i-th diagonal element, and S_{i,j} is the element in row i, column j of the affinity matrix;

calculate the normalized Laplacian matrix L by

L = I - D^{-1/2} S D^{-1/2};

wherein D is the degree matrix, S is the affinity matrix, and I is the identity matrix;

calculate the eigenvectors of the Laplacian matrix L and stack the eigenvectors corresponding to its k smallest eigenvalues (equivalently, the k largest eigenvalues of D^{-1/2} S D^{-1/2}) as the columns of a matrix X = [x_1, x_2, …, x_k] ∈ R^{n×k};

normalize each row vector of this matrix to unit length to obtain the target matrix;

and cluster the target matrix by the K-means clustering method to obtain k clusters, thereby realizing the potential low-rank representation subspace clustering.
CN202010082142.2A 2020-02-07 2020-02-07 Subspace clustering method and device for potential low-rank representation Pending CN111310813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010082142.2A CN111310813A (en) 2020-02-07 2020-02-07 Subspace clustering method and device for potential low-rank representation

Publications (1)

Publication Number Publication Date
CN111310813A true CN111310813A (en) 2020-06-19

Family

ID=71146932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010082142.2A Pending CN111310813A (en) 2020-02-07 2020-02-07 Subspace clustering method and device for potential low-rank representation

Country Status (1)

Country Link
CN (1) CN111310813A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400143A (en) * 2013-07-12 2013-11-20 中国科学院自动化研究所 Data subspace clustering method based on multiple view angles
CN106408530A (en) * 2016-09-07 2017-02-15 厦门大学 Sparse and low-rank matrix approximation-based hyperspectral image restoration method
CN108460412A (en) * 2018-02-11 2018-08-28 北京盛安同力科技开发有限公司 A kind of image classification method based on subspace joint sparse low-rank Structure learning
CN109685155A (en) * 2018-12-29 2019-04-26 广东工业大学 Subspace clustering method, device, equipment and storage medium based on multiple view
CN110378365A (en) * 2019-06-03 2019-10-25 广东工业大学 A kind of multiple view Subspace clustering method based on joint sub-space learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FEIPING NIE ET AL: "Joint Schatten p-norm and lp-norm robust matrix completion for missing value recovery", Knowledge and Information Systems *
SONG YU ET AL: "Subspace clustering based on latent low rank representation with Frobenius norm minimization", Neurocomputing *
LI KAIXIN: "Low-rank based subspace clustering algorithms", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813982A (en) * 2020-07-23 2020-10-23 中原工学院 Data processing method and device based on subspace clustering algorithm of spectral clustering
CN113627467A (en) * 2021-07-01 2021-11-09 杭州电子科技大学 Image clustering method based on non-convex approximation low-rank subspace
CN113420464A (en) * 2021-07-22 2021-09-21 西南交通大学 Aisle arrangement method considering robustness
CN113420464B (en) * 2021-07-22 2022-04-19 西南交通大学 Aisle arrangement method considering robustness
WO2023065525A1 (en) * 2021-10-22 2023-04-27 西安闻泰信息技术有限公司 Object feature matrix determination method and apparatus, device, and storage medium
CN116310462A (en) * 2023-05-19 2023-06-23 浙江财经大学 Image clustering method and device based on rank constraint self-expression
CN116310462B (en) * 2023-05-19 2023-08-11 浙江财经大学 Image clustering method and device based on rank constraint self-expression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200619)