CN114254703A - Robust local and global regularization non-negative matrix factorization clustering method - Google Patents
- Publication number
- CN114254703A (application number CN202111563605.8A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- global
- local
- robust
- regularized
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
Abstract
The invention relates to the technical field of data processing, and in particular to a robust local and global regularized non-negative matrix factorization clustering method, comprising the following steps: acquiring an image clustering sample; constructing a nearest-neighbor graph on the local scatter of the samples and introducing smooth regularization; representing the global geometry of the space by a transformation and incorporating it into the NMF algorithm as an additional principal component graph regularization term; applying the graph regularization constraints to the original NMF model through joint modeling and constraining the basis matrix with an L_p smoothness constraint; replacing the Euclidean norm in the error measure with correntropy, thereby obtaining the robust local and global regularized non-negative matrix factorization objective function; iterating a preset number of times with an iteratively reweighted method according to the objective function, updating the variables U and V, and completing the robust local and global regularized non-negative matrix factorization; and clustering the coefficient matrix with the K-means clustering algorithm.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a robust local and global regularized nonnegative matrix factorization clustering method.
Background
With the development of computer technology, high-dimensional data have found use in many different fields, and data dimensionality reduction has attracted increasing attention. Dimensionality reduction has wide application: because a single image is itself high-dimensional, a collection of images forms a data set in a high-dimensional space, and reducing its dimension can reveal the internal structure and information of the multivariate data and prepare it for subsequent tasks such as visualization, classification, and clustering.
Non-negative matrix factorization (NMF), an effective dimensionality reduction method, has been widely applied in pattern recognition, computer vision, and information retrieval. Its basic idea is to find two low-dimensional non-negative matrices whose product approximates the original high-dimensional matrix; the original data matrix is thus reconstructed using only additive operations, which gives NMF its parts-based representation property, and NMF has become one of the most powerful methods for clustering and feature selection. To improve on the original NMF, researchers have developed various extensions from different angles: for example, optimizing NMF with the alternating direction method of multipliers (ADMM); graph-regularized NMF (GNMF), which preserves the intrinsic geometry of the data space by constructing a simple graph to encode pairwise geometric relationships between samples; and manifold-regularized discriminative NMF (MD-NMF), which considers both the geometry of the data and the discriminative information of the different classes.
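As a brief, hedged illustration of the basic NMF idea described above (the classical Lee-Seung multiplicative updates, not the patent's robust method), a minimal NumPy sketch looks like this; the rank `k`, iteration count, and random initialization are illustrative choices:

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Basic NMF via multiplicative updates: X (m x n) ~ U (m x k) @ V.T (k x n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity of U and V.
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U) / (V @ U.T @ U + eps)
    return U, V

# After fitting, the reconstruction error is well below the norm of X itself.
X = np.abs(np.random.default_rng(1).random((20, 15)))
U, V = nmf(X, k=4)
err = np.linalg.norm(X - U @ V.T)
```

Only additive (non-subtractive) combinations of the factors appear in the reconstruction, which is the parts-based property the text mentions.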
All of the above methods use the Euclidean norm to minimize the distance between the original data matrix and the reconstructed matrix. However, much real-world data contains Gaussian noise, non-Gaussian noise (for example, arising in the measurement and collection of gene expression data), or outliers, and handling such noise or outliers effectively is a difficult problem in practice. In these cases the performance of Euclidean-norm-based non-negative matrix factorization methods degrades sharply. Correntropy, proposed for robust analysis in information-theoretic learning, has proven effective at handling noise and outliers and is widely used in signal processing, bioinformatics, face recognition, and related fields. Correntropy is a nonlinear, local similarity measure whose value reflects the probability that two random variables are similar. Rather than considering only the second moment, as the Euclidean norm does, correntropy takes higher-order moments into account.
Disclosure of Invention
In view of these problems, the invention discloses a robust local and global regularized non-negative matrix factorization clustering method. It adds correntropy to the objective function, which effectively reduces the influence of noise and outliers and improves the method's robustness to them; in addition, the method accounts for the geometric information of the data by incorporating graph regularization terms, and constrains the basis matrix with an L_p smoothness constraint to obtain a smooth and more accurate solution.
The invention adopts the following specific technical scheme:
a robust local and global regularized non-negative matrix factorization clustering method comprises the following steps:
S1, acquiring image clustering samples;
S20, constructing a nearest-neighbor graph on the local scatter of the samples and introducing smooth regularization;
S30, representing the global geometry of the space by a transformation and incorporating it into the NMF algorithm as an additional principal component graph regularization term;
S40, applying the graph regularization constraints to the original NMF model through joint modeling, and constraining the basis matrix with an L_p smoothness constraint;
S50, replacing the Euclidean norm in the error measure with correntropy, thereby obtaining the robust local and global regularized non-negative matrix factorization objective function;
S60, iterating a preset number of times with an iteratively reweighted method according to the objective function, updating the variables U and V, and completing the robust local and global regularized non-negative matrix factorization;
S70, clustering the coefficient matrix with the K-means clustering algorithm. Compared with traditional clustering methods, this method reveals the intrinsic geometric and discriminative structure of the data more effectively and improves clustering performance.
Further preferably, in step S20 a nearest-neighbor graph is constructed on the local scatter of the samples and smooth regularization is introduced, the smooth regularization term being:

\frac{1}{2}\sum_{i,j=1}^{n}\|v_i-v_j\|^{2}W_{ij}=\mathrm{Tr}(V^{T}LV)

where Tr(·) denotes the trace of a matrix, L is the graph Laplacian, L = D − W, D is the diagonal degree matrix whose entries are the row (or, since W is symmetric, column) sums of W, i.e. D_{ii} = Σ_j W_{ij}, and W is the weight matrix whose entries W_{ij} encode the nearest-neighbor relation between samples x_i and x_j.
Further preferably, in step S30 the global geometry of the space is represented by a transformation and incorporated into the NMF algorithm as an additional principal component graph regularization term; specifically, the global scatter over the coding matrix is maximized and defined as:

\sum_{i=1}^{n}\left\|v_i-\frac{1}{n}\sum_{j=1}^{n}v_j\right\|^{2}

which further simplifies to:

\mathrm{Tr}(V^{T}MV)

where M = I − E is called the principal component graph, E = (1/n)ee^T, I is the n × n identity matrix, and e is the n-dimensional column vector whose elements all equal 1.
Further preferably, in step S40 the graph regularization constraints are applied to the original NMF model through joint modeling, and the basis matrix is constrained with an L_p smoothness constraint; the resulting non-negative matrix factorization objective is specifically:

\min_{U,V\ge 0}\ \|X-UV^{T}\|_F^{2}+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)

where α and β are two trade-off parameters.

Constraining the basis matrix with the L_p smoothness constraint, so as to obtain a smooth and more accurate solution, gives:

\min_{U,V\ge 0}\ \|X-UV^{T}\|_F^{2}+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)+2\lambda\|U\|_p

where λ is a non-negative parameter.
Further preferably, in step S50 correntropy replaces the Euclidean norm in the error measure, giving the robust local and global regularized non-negative matrix factorization objective:

\min_{U,V\ge 0}\ -\sum_{i=1}^{n}g_{\sigma}\!\left(\|x_i-Uv_i\|\right)+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)+2\lambda\|U\|_p

The first term is the correntropy-based reconstruction error, the second term is the local smoothness graph regularization term, the third term is the global geometric structure graph regularization term, and the fourth term constrains the basis matrix with the L_p smoothness constraint.
Further preferably, in step S60, iterating a preset number of times with an iteratively reweighted method according to the objective function and updating the variables U and V to complete the robust local and global regularized non-negative matrix factorization comprises:

establishing the Lagrangian function L from the non-negative matrix factorization objective:

L = \mathrm{Tr}(XX^{T}) - 2\,\mathrm{Tr}(XVU^{T}) + \mathrm{Tr}(UV^{T}VU^{T}) + \alpha\,\mathrm{Tr}(V^{T}LV) - \beta\,\mathrm{Tr}(V^{T}MV) + 2\lambda\|U\|_p + \mathrm{Tr}(\Psi U^{T}) + \mathrm{Tr}(\Phi V^{T})

where Ψ = [ψ_{ik}] and Φ = [φ_{jk}];

taking the partial derivatives of L with respect to the basis matrix U and the coefficient matrix V, and applying the Karush-Kuhn-Tucker conditions ψ_{ik}u_{ik} = 0 and φ_{jk}v_{jk} = 0, yields the iterative equations of the basis matrix U and the coefficient matrix V.

The update rule of the variable U is as follows:

The update rule of the variable V is as follows:
further preferably, the matrix U and the matrix V are separately subjected to partial derivation in step (step) using the Karush-Kuhn-Tucker conditionφjkνjkObtaining an iterative equation of the basis matrix U and the coefficient matrix V as 0, further including:
performing loop iterations according to the initialized weights and the iterative expressions of the basis matrix U and the coefficient matrix V;

and, after the loop reaches the preset number of iterations t, outputting the basis matrix U and the coefficient matrix V to complete the robust local and global regularized non-negative matrix factorization.
The invention has the following beneficial effects: compared with traditional image discrimination methods, adding correntropy to the objective function helps capture the higher-order moments of the data, effectively reduces the influence of noise and outliers, and strengthens the robustness of the non-negative matrix factorization; the invention accounts for the geometric information of the data by incorporating graph regularization terms and constrains the basis matrix with the L_p smoothness constraint, thereby obtaining a smooth and more accurate solution and better recognition accuracy; in addition, in terms of computational complexity, the invention requires little computation time to achieve the highest recognition accuracy on the databases tested.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
For the purpose of enhancing the understanding of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and examples, which are provided for the purpose of illustration only and are not intended to limit the scope of the present invention.
As shown in FIG. 1, a robust local and global regularized non-negative matrix factorization clustering method, based on graph-regularized non-negative matrix factorization, includes the following steps:
Step S1: acquiring an image clustering sample;
Step S20: constructing a nearest-neighbor graph on the local scatter of the samples obtained in step S1 and introducing smooth regularization;
Step S30: representing the global geometry of the space by a transformation and incorporating it into the NMF algorithm as an additional principal component graph regularization term;
Step S40: applying the graph regularization constraints to the original NMF model through joint modeling and constraining the basis matrix with an L_p smoothness constraint;
Step S50: replacing the Euclidean norm in the error measure with correntropy, thereby obtaining the robust local and global regularized non-negative matrix factorization objective function;
Step S60: iterating a preset number of times with an iteratively reweighted method according to the objective function, updating the variables U and V, and completing the robust local and global regularized non-negative matrix factorization;
Step S70: clustering the coefficient matrix with the K-means clustering algorithm.
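Before the derivations below, the final step S70 can be sketched concretely: a plain Lloyd's K-means on the rows of the learned coefficient matrix V. The deterministic farthest-point initialization here is an illustrative choice standing in for any K-means implementation, not the patent's specified one:

```python
import numpy as np

def kmeans(V, k, n_iter=50):
    """Lloyd's K-means on the rows of V with deterministic
    farthest-point initialization (an illustrative choice)."""
    centers = [V[0]]
    for _ in range(k - 1):
        # Next center: the row farthest from all centers chosen so far.
        d = np.min([((V - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(V[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        # Assign each row to its nearest center, then recompute the means.
        d = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = V[labels == j].mean(axis=0)
    return labels

# Two well-separated blobs in coefficient space split cleanly.
rng = np.random.default_rng(0)
V = np.vstack([rng.normal(0.0, 0.1, (10, 3)), rng.normal(5.0, 0.1, (10, 3))])
labels = kmeans(V, k=2)
```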
The image data can be regarded as an m × n matrix X = [x_1, x_2, …, x_n] ∈ R^{m×n}, where x_i (i ∈ {1, 2, …, n}) is a data sample. The NMF method aims to decompose the data matrix X into the product of a basis matrix U ∈ R^{m×k} and a coefficient matrix V ∈ R^{n×k}, i.e. X ≈ UV^T, with k ≪ min(m, n). Measuring the discrepancy between the original matrix X and UV^T with the Frobenius norm, the NMF objective function can be expressed as:

\min_{U\ge 0,\,V\ge 0}\ \|X-UV^{T}\|_F^{2} \quad (1)

where ‖·‖_F denotes the Frobenius norm.
Constructing a nearest-neighbor graph on the local scatter of the samples effectively models the local geometry, so a smooth regularization is introduced on the local geometric representation of the samples in space. Let v_i be the low-dimensional representation of x_i in the new coding matrix; the Euclidean distance can be used to measure the smoothness between the low-dimensional representations:

\frac{1}{2}\sum_{i,j=1}^{n}\|v_i-v_j\|^{2}W_{ij}=\mathrm{Tr}(V^{T}LV) \quad (2)

where Tr(·) denotes the trace of a matrix, L is the graph Laplacian, L = D − W, D is the diagonal degree matrix whose entries are the row (or, since W is symmetric, column) sums of W, i.e. D_{ii} = Σ_j W_{ij}, and W is the weight matrix whose entries W_{ij} encode the nearest-neighbor relation between samples x_i and x_j.
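The quantities above can be assembled concretely. The sketch below builds a symmetric nearest-neighbor weight matrix W (using a simple 0/1 weighting, one common choice; the patent's exact weighting formula is not reproduced in this text), the degree matrix D, and the Laplacian L = D − W, and its test checks the identity Tr(VᵀLV) = ½ Σ_{ij} W_{ij}‖v_i − v_j‖²:

```python
import numpy as np

def knn_graph_laplacian(X, k=3):
    """Build a symmetric 0/1 k-nearest-neighbor weight matrix W,
    the degree matrix D with D_ii = sum_j W_ij, and the graph
    Laplacian L = D - W. X holds one sample per row."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)              # a sample is not its own neighbor
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[:k]:       # k nearest neighbors of sample i
            W[i, j] = W[j, i] = 1.0           # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    return W, D, D - W

X = np.random.default_rng(0).random((12, 4))
W, D, L = knn_graph_laplacian(X)
```

Every row of L sums to zero, which is what makes Tr(VᵀLV) penalize differences between neighboring representations only.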
The global geometry of the space is represented by a transformation and incorporated into the NMF algorithm as an additional principal component graph regularization term. Specifically, the global scatter over the coding matrix is maximized and defined as:

\sum_{i=1}^{n}\left\|v_i-\frac{1}{n}\sum_{j=1}^{n}v_j\right\|^{2} \quad (4)

Equation (4) further simplifies to:

\mathrm{Tr}(V^{T}MV) \quad (5)

where M = I − E is called the principal component graph, E = (1/n)ee^T, I is the n × n identity matrix, and e is the n-dimensional column vector whose elements all equal 1.
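The principal component graph M acts as a centering projection, so Tr(VᵀMV) is exactly the total scatter of the rows of V about their mean. A small numerical check of this, using only the definitions above:

```python
import numpy as np

n = 8
e = np.ones((n, 1))
E = (e @ e.T) / n              # E = (1/n) e e^T
M = np.eye(n) - E              # principal component graph M = I - E

V = np.random.default_rng(0).random((n, 3))
scatter = np.trace(V.T @ M @ V)                  # Tr(V^T M V)
centered = ((V - V.mean(axis=0)) ** 2).sum()     # total scatter about the mean
```

M is idempotent (M² = M) and annihilates the all-ones vector, which is why maximizing Tr(VᵀMV) spreads the representations apart globally.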
Applying the graph regularization constraints to the original NMF model through the joint modeling of (2) and (5), the resulting non-negative matrix factorization objective is defined as:

\min_{U,V\ge 0}\ \|X-UV^{T}\|_F^{2}+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV) \quad (6)

where α and β are two trade-off parameters.

Finally, an L_p smoothness constraint is imposed on the basis matrix to obtain a smooth and more accurate solution:

\min_{U,V\ge 0}\ \|X-UV^{T}\|_F^{2}+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)+2\lambda\|U\|_p \quad (7)

where λ is a non-negative parameter.
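For concreteness, the combined objective can be evaluated as below. Note that the patent text does not spell out which L_p norm is meant for the basis matrix, so the entrywise matrix p-norm used here is an assumption:

```python
import numpy as np

def objective(X, U, V, L, M, alpha, beta, lam, p=1.0):
    """Value of the combined objective: Frobenius reconstruction error,
    local and global graph terms, and the L_p term on the basis matrix.
    ASSUMPTION: ||U||_p is taken entrywise, (sum_ik u_ik^p)^(1/p)."""
    recon = np.linalg.norm(X - U @ V.T, "fro") ** 2
    local = alpha * np.trace(V.T @ L @ V)      # alpha * Tr(V^T L V), minimized
    glob = -beta * np.trace(V.T @ M @ V)       # -beta * Tr(V^T M V), maximized
    lp = 2 * lam * (U ** p).sum() ** (1.0 / p)
    return recon + local + glob + lp
```

With α = β = λ = 0 this reduces to the plain NMF objective (1).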
In practical applications, effectively handling noise or outliers is a difficult problem. Correntropy is therefore used in place of the Euclidean norm in the error measure to improve the robustness of the algorithm.

Correntropy is a measure of the nonlinear, local similarity between two random variables x and y, defined as:
C(x,y)=E[k(x,y)] (8)
where E[·] is the expectation operator and k(·,·) is a kernel function satisfying Mercer's theorem.
The invention uses the Gaussian kernel as the kernel function of the correntropy:

g_{\sigma}(x,y)=\exp\!\left(-\frac{(x-y)^{2}}{2\sigma^{2}}\right) \quad (9)

where σ > 0 is the kernel bandwidth parameter. If x and y are vectors, the Gaussian kernel becomes:

g_{\sigma}(x,y)=\exp\!\left(-\frac{\|x-y\|^{2}}{2\sigma^{2}}\right) \quad (10)
Since the joint distribution of the random variables x and y is usually unknown and only a finite number of data samples {(x_i, y_i)}_{i=1}^{M} is available, the correntropy of a sample can be estimated as:

\hat{C}(x,y)=\frac{1}{M}\sum_{i=1}^{M}g_{\sigma}(x_i-y_i) \quad (11)
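The sample estimate (11) can be written directly in code. The example contrasts it with the mean squared error to show why a single outlier barely moves the correntropy while it dominates the squared error (the bandwidth σ = 1 is an illustrative choice):

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy: (1/M) * sum_i g_sigma(x_i - y_i), Gaussian kernel."""
    r = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-r**2 / (2 * sigma**2)))

x = np.zeros(100)
y_clean = np.zeros(100)
y_outlier = y_clean.copy()
y_outlier[0] = 1000.0          # one gross outlier among 100 samples

# The Gaussian kernel saturates: the outlier can cost at most 1/M in
# correntropy, while it completely dominates the mean squared error.
c_clean = correntropy(x, y_clean)
c_out = correntropy(x, y_outlier)
mse_out = np.mean((x - y_outlier) ** 2)
```

This bounded per-sample influence is precisely the robustness property the text attributes to the maximum correntropy criterion.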
Maximizing equation (11) is called the maximum correntropy criterion (MCC), whose advantage is that higher-order moments can be taken into account; therefore, adding the maximum correntropy criterion to non-negative matrix factorization makes the algorithm more robust when handling outliers and noise.
Using correntropy instead of the Euclidean norm to improve the robustness of the algorithm, the correntropy objective function can be written as:

\max_{U,V\ge 0}\ \sum_{i=1}^{n}g_{\sigma}\!\left(\|x_i-Uv_i\|\right) \quad (12)
Clearly, the correntropy objective function is non-quadratic and non-convex and is difficult to optimize directly. The half-quadratic (HQ) technique, based on the theory of convex conjugate functions, can effectively solve this optimization problem: it converts the correntropy terms in the objective function into quadratic terms in multiplicative form. By the properties of the convex conjugate, there exists a convex conjugate function φ(·) of g(·) such that:

g(x)=\max_{z}\left(z\|x\|^{2}-\varphi(z)\right) \quad (13)

where, for fixed x, the maximum is reached at z = −g(x).
By definition, substituting equation (13) into the objective function gives:

\max_{U,V\ge 0,\,z}\ \sum_{i=1}^{n}\left(z_i\|x_i-Uv_i\|^{2}-\varphi(z_i)\right)

where z = [z_1, …, z_n]^T is the auxiliary vector. Fixing U and V and maximizing the augmented objective function with respect to z yields:

z_i=-g_{\sigma}\!\left(\|x_i-Uv_i\|\right)

The kernel bandwidth parameter σ is then updated iteratively from the current reconstruction residuals.
The objective function can then be rewritten as a maximization problem and, for ease of computation, equivalently transformed into the following minimization problem:

\min_{U,V\ge 0}\ \sum_{i=1}^{n}Q_{ii}\|x_i-Uv_i\|^{2}+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)+2\lambda\|U\|_p

Because the robust local and global regularized non-negative matrix factorization model is non-convex and a global optimum cannot be found directly, an alternating iteration strategy can be adopted to solve the model and obtain a local optimum. In the form above, Q denotes a diagonal matrix with diagonal elements:

Q_{ii}=-z_i=g_{\sigma}\!\left(\|x_i-Uv_i\|\right)
A multiplicative iterative algorithm is used to solve the objective. For the constraints u_{ik} ≥ 0 and v_{jk} ≥ 0, Lagrange multipliers ψ_{ik} and φ_{jk} are introduced; letting Ψ = [ψ_{ik}] and Φ = [φ_{jk}], the Lagrangian function L can be expressed as:

L = \mathrm{Tr}(XX^{T}) - 2\,\mathrm{Tr}(XVU^{T}) + \mathrm{Tr}(UV^{T}VU^{T}) + \alpha\,\mathrm{Tr}(V^{T}LV) - \beta\,\mathrm{Tr}(V^{T}MV) + 2\lambda\|U\|_p + \mathrm{Tr}(\Psi U^{T}) + \mathrm{Tr}(\Phi V^{T}) \quad (21)
Setting the partial derivatives of L with respect to U and V to zero, and using L = D − W and M = I − E, the update rules for the variables U and V are derived.
the update rule of the variable U is as follows:
the update rule of the variable V is as follows:
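The exact multiplicative update equations appear only as images in the original and are not reproduced in this text. The sketch below is therefore a plausible reconstruction under stated assumptions: correntropy-induced sample weights Q_ii = g_σ(‖x_i − Uv_i‖) folded into weighted GNMF-style multiplicative updates, the global term split into its positive and negative parts, a common HQ bandwidth heuristic for σ, and the L_p term omitted for simplicity. It illustrates the alternating, iteratively reweighted structure of steps S50-S60, not the patent's precise rules:

```python
import numpy as np

def rlg_nmf(X, W, k, alpha=0.1, beta=0.01, n_iter=100, eps=1e-9, seed=0):
    """Hedged sketch: min_{U,V>=0} sum_i Q_ii ||x_i - U v_i||^2
    + alpha*Tr(V^T L V) - beta*Tr(V^T M V)  (L_p term omitted),
    with correntropy weights Q_ii re-estimated every sweep."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + eps
    V = rng.random((n, k)) + eps
    D = np.diag(W.sum(axis=1))               # degree matrix, so L = D - W
    e = np.ones((n, 1))
    E = (e @ e.T) / n                         # M = I - E; -beta*M*V splits into
    for _ in range(n_iter):                   # +beta*V (numer.) and +beta*E@V (denom.)
        R = X - U @ V.T                       # residual of each sample (column)
        r = np.linalg.norm(R, axis=0)
        sigma = np.sqrt(np.mean(r**2) / 2) + eps   # bandwidth heuristic (assumption)
        q = np.exp(-r**2 / (2 * sigma**2))         # correntropy weights Q_ii
        Q = np.diag(q)
        # Weighted multiplicative updates derived for the weighted objective.
        U *= (X @ Q @ V) / (U @ (V.T @ Q @ V) + eps)
        V *= (Q @ X.T @ U + alpha * (W @ V) + beta * V) / \
             (Q @ V @ (U.T @ U) + alpha * (D @ V) + beta * (E @ V) + eps)
    return U, V, q

# Usage: corrupt one sample heavily; its weight q should collapse toward zero.
rng = np.random.default_rng(1)
X = np.abs(rng.random((10, 30)))
X[:, 0] += 50.0                               # a gross outlier sample
Wg = np.ones((30, 30)) - np.eye(30)           # toy fully-connected graph
U, V, q = rlg_nmf(X, Wg, k=3)
```

The multiplicative form keeps U and V non-negative throughout, and the reweighting automatically suppresses the corrupted sample rather than letting it distort the factors.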
the foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principle of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the present invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (7)
1. A robust local and global regularized non-negative matrix factorization clustering method is characterized by comprising the following steps:
step S1: acquiring an image clustering sample;
step S20: constructing a nearest-neighbor graph on the local scatter of the samples obtained in step S1 and introducing smooth regularization;
step S30: representing the global geometry of the space by a transformation and incorporating it into the NMF algorithm as an additional principal component graph regularization term;
step S40: applying the graph regularization constraints to the original NMF model through joint modeling and constraining the basis matrix with an L_p smoothness constraint;
step S50: replacing the Euclidean norm in the error measure with correntropy, thereby obtaining the robust local and global regularized non-negative matrix factorization objective function;
step S60: iterating a preset number of times with an iteratively reweighted method according to the objective function, updating the variables U and V, and completing the robust local and global regularized non-negative matrix factorization;
step S70: clustering the coefficient matrix with the K-means clustering algorithm.
2. The robust local and global regularized non-negative matrix factorization clustering method according to claim 1, wherein in step S20 the smooth regularization term is specifically:

\frac{1}{2}\sum_{i,j=1}^{n}\|v_i-v_j\|^{2}W_{ij}=\mathrm{Tr}(V^{T}LV)

where Tr(·) denotes the trace of a matrix, L is the graph Laplacian, L = D − W, D is the diagonal degree matrix whose entries are the row sums of W, i.e. D_{ii} = Σ_j W_{ij}, and W is the weight matrix whose entries W_{ij} encode the nearest-neighbor relation between samples.
3. The robust local and global regularized non-negative matrix factorization clustering method according to claim 2, wherein in step S30 the global scatter over the coding matrix is maximized and defined as:

\sum_{i=1}^{n}\left\|v_i-\frac{1}{n}\sum_{j=1}^{n}v_j\right\|^{2}

which further simplifies to:

\mathrm{Tr}(V^{T}MV)

where M = I − E is called the principal component graph, E = (1/n)ee^T, I is the n × n identity matrix, and e is the n-dimensional column vector whose elements all equal 1.
4. The robust local and global regularized non-negative matrix factorization clustering method according to claim 3, wherein in step S40 the obtained non-negative matrix factorization objective is specifically:

\min_{U,V\ge 0}\ \|X-UV^{T}\|_F^{2}+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)

where α and β are two trade-off parameters;

and the basis matrix is constrained with the L_p smoothness constraint to obtain a smooth and more accurate solution:

\min_{U,V\ge 0}\ \|X-UV^{T}\|_F^{2}+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)+2\lambda\|U\|_p

where λ is a non-negative parameter.
5. The robust local and global regularized non-negative matrix factorization clustering method according to claim 4, wherein in step S50 the robust local and global regularized non-negative matrix factorization objective is specifically:

\min_{U,V\ge 0}\ -\sum_{i=1}^{n}g_{\sigma}\!\left(\|x_i-Uv_i\|\right)+\alpha\,\mathrm{Tr}(V^{T}LV)-\beta\,\mathrm{Tr}(V^{T}MV)+2\lambda\|U\|_p

in which the first term is the correntropy-based reconstruction error, the second term is the local smoothness graph regularization term, the third term is the global geometric structure graph regularization term, and the fourth term constrains the basis matrix with the L_p smoothness constraint.
6. The robust local and global regularized non-negative matrix factorization clustering method according to claim 5, wherein step S60 comprises:

establishing the Lagrangian function L from the non-negative matrix factorization objective:

L = \mathrm{Tr}(XX^{T}) - 2\,\mathrm{Tr}(XVU^{T}) + \mathrm{Tr}(UV^{T}VU^{T}) + \alpha\,\mathrm{Tr}(V^{T}LV) - \beta\,\mathrm{Tr}(V^{T}MV) + 2\lambda\|U\|_p + \mathrm{Tr}(\Psi U^{T}) + \mathrm{Tr}(\Phi V^{T})

where Ψ = [ψ_{ik}] and Φ = [φ_{jk}];

taking the partial derivatives of L with respect to the basis matrix U and the coefficient matrix V, and applying the Karush-Kuhn-Tucker conditions ψ_{ik}u_{ik} = 0 and φ_{jk}v_{jk} = 0, to obtain the iterative expressions of the basis matrix U and the coefficient matrix V:
the update rule of the variable U is as follows:
the update rule of the variable V is as follows:
7. The robust local and global regularized non-negative matrix factorization clustering method according to claim 6, wherein taking the partial derivatives with respect to the matrices U and V in step S60 and applying the Karush-Kuhn-Tucker conditions ψ_{ik}u_{ik} = 0 and φ_{jk}v_{jk} = 0 to obtain the iterative equations of the basis matrix U and the coefficient matrix V further comprises:

performing loop iterations according to the initialized weights and the iterative expressions of the basis matrix U and the coefficient matrix V;

and, after the loop reaches the preset number of iterations t, outputting the basis matrix U and the coefficient matrix V to complete the robust local and global regularized non-negative matrix factorization.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111563605.8A CN114254703A (en) | 2021-12-20 | 2021-12-20 | Robust local and global regularization non-negative matrix factorization clustering method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114254703A true CN114254703A (en) | 2022-03-29 |
Family
ID=80793172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111563605.8A Pending CN114254703A (en) | 2021-12-20 | 2021-12-20 | Robust local and global regularization non-negative matrix factorization clustering method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114254703A (en) |
Cited By (5)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN115810108A (en) * | 2022-09-23 | 2023-03-17 | 南京审计大学 | Image feature extraction method in big data audit based on REGNMF
CN115810108B (en) * | 2022-09-23 | 2023-08-08 | 南京审计大学 | Image feature extraction method in big data audit based on REGNMF
CN115577564A (en) * | 2022-11-11 | 2023-01-06 | 江西师范大学 | Robust nonnegative matrix decomposition method and system for multi-constraint adaptive graph learning
CN115577564B (en) * | 2022-11-11 | 2023-08-22 | 江西师范大学 | Robust non-negative matrix factorization method and system for multi-constraint adaptive graph learning
CN118197342A (en) * | 2024-05-15 | 2024-06-14 | 电子科技大学中山学院 | High-precision audio signal denoising method based on improved NMF and K-means++
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114254703A (en) | Robust local and global regularization non-negative matrix factorization clustering method | |
JP5072693B2 (en) | PATTERN IDENTIFICATION DEVICE AND ITS CONTROL METHOD, ABNORMAL PATTERN DETECTION DEVICE AND ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM | |
Bunte et al. | Adaptive local dissimilarity measures for discriminative dimension reduction of labeled data | |
Xie et al. | Learning sparse frame models for natural image patterns | |
CN105608478B (en) | image feature extraction and classification combined method and system | |
CN102682306B (en) | Wavelet pyramid polarization texture primitive feature extracting method for synthetic aperture radar (SAR) images | |
US20150074130A1 (en) | Method and system for reducing data dimensionality | |
US6993189B2 (en) | System and method to facilitate pattern recognition by deformable matching | |
CN109583380B (en) | Hyperspectral classification method based on attention-constrained non-negative matrix factorization | |
CN109657611A (en) | A kind of adaptive figure regularization non-negative matrix factorization method for recognition of face | |
CN112465062A (en) | Clustering method based on manifold learning and rank constraint | |
Wu et al. | Learning the nonlinear geometry of high-dimensional data: Models and algorithms | |
CN109840545A (en) | A kind of robustness structure Non-negative Matrix Factorization clustering method based on figure regularization | |
Lu et al. | Robust low-rank representation with adaptive graph regularization from clean data | |
Zhao et al. | Two‐Phase Incremental Kernel PCA for Learning Massive or Online Datasets | |
Wei et al. | Spectral clustering steered low-rank representation for subspace segmentation | |
CN108009586B (en) | Capping concept decomposition method and image clustering method | |
Al-Sharadqah et al. | New methods for detecting concentric objects with high accuracy | |
CN112396089B (en) | Image matching method based on LFGC network and compression excitation module | |
Thorstensen et al. | Pre-Image as Karcher Mean using Diffusion Maps: Application to shape and image denoising | |
CN114648789A (en) | Face feature dimension reduction extraction method and system | |
Gao et al. | Gabor texture in active appearance models | |
Karami et al. | Variational inference for deep probabilistic canonical correlation analysis | |
CN114266311A (en) | Non-negative matrix factorization clustering method for local identification structure maintenance | |
CN111639685A (en) | Feature selection method based on flexible manifold embedding and structure diagram optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||