CN109409422A - A one-step spectral clustering method based on spectral rotation - Google Patents
A one-step spectral clustering method based on spectral rotation
- Publication number: CN109409422A (application CN201811187977.3A)
- Authority: CN (China)
- Prior art date: 2018-10-12
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
Abstract
The invention discloses a one-step spectral clustering method based on spectral rotation, relating to the field of computer big-data information technology. The technical problem it solves is to provide a spectral clustering method with simplified steps and high clustering accuracy. The method integrates the learning of the relation matrix, the learning of the spectral representation, the optimization of the k-means clustering and the learning of the transition matrix into one framework, learns the relation matrix in the low-dimensional feature space obtained after reducing the dimensionality of the raw data set, and finds a better clustering hyperplane by rotating the original k-means result, thereby obtaining a better clustering result. The invention simplifies the steps of spectral clustering, its clustering time complexity on big data is linear, only simple mathematical models are involved when writing the code, it is easy to implement, and its clustering accuracy is high.
Description
Technical field
The present invention relates to the field of computer big-data information technology, and more particularly to a one-step spectral clustering method based on spectral rotation.
Background art
With the rapid development of the Internet, and of the mobile Internet in particular, large volumes of data are continuously being collected and organized. The main research directions of current big-data knowledge discovery are partition, clustering, retrieval and incremental learning. Among these, clustering has become a research hotspot because it can help uncover the information hidden in big data.
Among the many clustering methods, spectral clustering has become a popular research direction because it can cluster on sample spaces of arbitrary shape and converges to the globally optimal solution. Prior-art spectral clustering methods are generally divided into three major steps: first the relation matrix is constructed, then the spectral representation is learned, and finally the obtained spectral representation is clustered with a spectral partitioning method, namely by applying k-means to the matrix formed by the first d eigenvectors obtained from the eigendecomposition of the Laplacian matrix and taking the result as the final clustering. For prior-art spectral clustering methods, constructing a new, reliable, high-quality relation matrix is an important step, yet the relation matrix built by the prior art is obtained from the original Euclidean feature space and cannot accurately reflect the true relationships among the data, so subsequent processing based on this relation matrix cannot yield an accurate subspace partition. In addition, the partitioning hyperplane selected by the final k-means clustering step is not the better partitioning hyperplane for the true distribution of the data set, which has a large negative effect on clustering accuracy.
Summary of the invention
In view of the deficiencies of the prior art, the technical problem solved by the invention is to provide a spectral clustering method with simplified steps and high clustering accuracy.
To solve the above technical problem, the technical solution adopted by the present invention is a one-step spectral clustering method based on spectral rotation: the learning of the relation matrix, the learning of the spectral representation, the optimization of the k-means clustering and the learning of the transition matrix are integrated into one framework; the relation matrix is learned in the low-dimensional feature space obtained after reducing the dimensionality of the raw data set; and a better clustering hyperplane is found by rotating the original k-means result, yielding a better clustering result. The method comprises the following steps:
(1) Integrate the learning of the relation matrix, the learning of the spectral representation, the optimization of the k-means clustering and the learning of the transition matrix into one framework and set the objective function. The detailed process is as follows:
The objective function is set as follows, with the corresponding constraints:
s.t. Y ∈ {0,1}, Y1 = 1, R^T R = I, S ∈ S, W^T X^T X W = I;
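The objective expression itself appears only as a formula image in the source and is not reproduced here; a form consistent with the term-by-term description below (an assumption, not necessarily the patent's exact expression) would be:

```latex
\min_{S,W,Y,R}\ \sum_{i,j}\bigl\lVert W^{\top}x_i - W^{\top}x_j\bigr\rVert_2^{2}\,s_{ij}
\;+\;\alpha\,\lVert S\rVert_F^{2}
\;+\;\beta\,\bigl\lVert XWR - Y\bigr\rVert_F^{2}
\quad\text{s.t.}\ \ Y\in\{0,1\}^{n\times k},\ Y\mathbf{1}=\mathbf{1},\ R^{\top}R=I,\ S\in\mathcal{S},\ W^{\top}X^{\top}XW=I.
```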
where X is the training set, x_i denotes the i-th sample of the data set, W is the coefficient matrix of the attributes to be learned, and S is the relation matrix between the training samples.
The first and second terms of the objective function serve to learn, in the low-dimensional feature space of the raw data set, a relation matrix that better expresses the relationships between samples; this removes the inaccuracy that noise possibly present in the raw data would otherwise introduce into the clustering result, and a better spectral representation is learned at the same time. The third term makes the predicted result approach the true clustering result more closely through spectral rotation, thereby improving clustering accuracy. The first constraint, Y ∈ {0,1}, Y1 = 1, forces Y to be an indicator matrix, i.e. a matrix in which every row contains exactly one 1 and all remaining elements are 0. The second constraint, R^T R = I, makes the projected samples as well separated as possible, which facilitates the subsequent clustering. The third constraint, S ∈ S, restricts the values of the matrix S. The fourth constraint, W^T X^T X W = I, ensures that the new dimension-reduced samples XW are obtained by an orthogonal projection, making the resulting spectral representation more reasonable and accurate.
(2) Solve the objective function to obtain the clustering result. The sub-steps are as follows:
1) Initialize the matrices S, W, Y and R to give the whole iterative process an initial value: S is constructed with a heat kernel function, W is a completely random matrix, Y is a random indicator matrix, and R is an identity matrix.
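A minimal sketch of this initialization, assuming a Gaussian heat-kernel bandwidth sigma and a fixed random seed (neither value is specified in the text):

```python
import numpy as np

def initialize(X, k, sigma=1.0, seed=0):
    """Sub-step 1) sketch: heat-kernel S, completely random W, random 0/1
    indicator Y (one 1 per row), identity R."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    S = np.exp(-D2 / (2.0 * sigma ** 2))          # heat-kernel similarity
    W = rng.random((d, k))                        # completely random coefficient matrix
    Y = np.eye(k)[rng.integers(0, k, size=n)]     # random indicator matrix
    R = np.eye(k)                                 # identity rotation
    return S, W, Y, R
```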
2) Fix the other variables and update W within an ADMM framework, setting XW − Z = 0; the objective function at this point is as follows:
where Z is the new variable introduced in order to optimize W, U is the residual term obtained with the method of Lagrange multipliers, and the superscript denotes the iteration index.
W is updated as follows:
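The closed-form W-update itself appears only as a formula image in the source. The following is a minimal ADMM sketch for a relaxed variant of this sub-step, assuming the graph term takes the form tr(Z^T L_S Z) with L_S the Laplacian of S, using R R^T = I for the square rotation R, and dropping the constraint W^T X^T X W = I for simplicity; it is an illustration, not the patent's exact update:

```python
import numpy as np

def update_W_admm(X, L_S, Y, R, W0, beta=1.0, rho=1.0, n_iter=20):
    """ADMM sketch for W with the split XW = Z and scaled dual U (assumed,
    relaxed form of the sub-problem; not the patent's exact formula)."""
    n = X.shape[0]
    W = W0.copy()
    Z = X @ W
    U = np.zeros_like(Z)
    A = 2.0 * L_S + (2.0 * beta + rho) * np.eye(n)   # constant system matrix of the Z-step
    for _ in range(n_iter):
        # Z-step: minimise tr(Z^T L_S Z) + beta*||ZR - Y||^2 + (rho/2)*||XW - Z + U||^2
        Z = np.linalg.solve(A, 2.0 * beta * Y @ R.T + rho * (X @ W + U))
        # W-step: least-squares fit of XW to Z - U
        W, *_ = np.linalg.lstsq(X, Z - U, rcond=None)
        # dual ascent on the residual XW - Z
        U = U + X @ W - Z
    return W
```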
3) Fix the other variables and update Y; the objective function at this point is as follows:
where the subscript F indicates that the error is measured with the Frobenius norm, so that the prediction error of every sample is taken into account.
Y is updated as follows:
where y_{i,j} denotes each element of the indicator matrix Y, G is an all-ones matrix, and j is the column index at which the corresponding entry of G − XWR is smallest.
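The Y-update formula is likewise not reproduced. Reading it as minimizing ||Y − XWR||_F^2 over indicator matrices, each row of Y places its single 1 at the column where the corresponding entry of G − XWR is smallest, i.e. where XWR is largest; a sketch under that assumption:

```python
import numpy as np

def update_Y(X, W, R):
    """Sub-step 3) sketch: row-wise assignment of the indicator matrix Y."""
    M = X @ W @ R
    j = np.argmax(M, axis=1)              # column where 1 - M_ij (i.e. G - XWR) is smallest
    Y = np.zeros_like(M)
    Y[np.arange(M.shape[0]), j] = 1.0     # exactly one 1 per row
    return Y
```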
4) Fix the other variables and update R; the objective function at this point is as follows:
R is updated as follows:
R = J M^T, with W^T X^T Y = J Σ M^T,
where J and M are the left and right unitary matrices obtained from the singular value decomposition of W^T X^T Y.
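This is the classical orthogonal Procrustes solution; a sketch, assuming W^T X^T Y = J Σ M^T denotes the singular value decomposition:

```python
import numpy as np

def update_R(X, W, Y):
    """Sub-step 4) sketch: R = J M^T with W^T X^T Y = J Sigma M^T (SVD)."""
    J, _, Mt = np.linalg.svd(W.T @ X.T @ Y)   # left/right unitary factors J and M^T
    return J @ Mt
```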
5) Fix the other variables and update S; the objective function at this point is as follows:
where α and β are real coefficients that balance the error term and the regularization term, W is the coefficient matrix of the samples, and S is the similarity matrix of the samples.
S is updated as follows:
where θ denotes the introduced Lagrange multiplier term and ρ is the coefficient that adjusts this multiplier term.
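The S-update formula, with its multiplier θ and coefficient ρ, is also not reproduced. Constraints of the form S ∈ S (rows non-negative and summing to one) are commonly enforced with a Euclidean projection onto the probability simplex, in which a Lagrange-multiplier shift plays the role of θ; a standard routine of this kind is sketched below as an illustration, not as the patent's exact formula:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector v onto the probability simplex
    {s : s >= 0, sum(s) = 1}; theta below is the Lagrange-multiplier shift."""
    u = np.sort(v)[::-1]                                   # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]     # last index where the condition holds
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)
```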
6) Repeat sub-steps 2) to 5) until Y is stable, that is, until the change between two consecutive results is smaller than a given threshold or the number of iterations reaches a given limit, at which point the iteration terminates; the Y obtained at this point is the clustering result.
7) Output the clustering result Y. For a test data set, likewise first execute sub-step 1) to initialize the matrices S, W, Y and R, then repeat sub-steps 2) to 5) until the result is stable; the resulting Y matrix is the clustering result of the test set.
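A schematic of the full iteration, reusing the sketch functions defined above; the S-step shown is only a placeholder based on projected pairwise distances (the patent's exact formula is not reproduced), and the thresholds 10^-5 and 50 follow the embodiment described later:

```python
import numpy as np

def one_step_spectral_clustering(X, k, tol=1e-5, max_iter=50):
    """Overall iteration sketch tying the sub-step sketches together;
    schematic only, not the patent's exact closed-form updates."""
    S, W, Y, R = initialize(X, k)
    for _ in range(max_iter):
        Y_old = Y
        Ssym = (S + S.T) / 2.0
        L_S = np.diag(Ssym.sum(axis=1)) - Ssym            # graph Laplacian of S (assumed form)
        W = update_W_admm(X, L_S, Y, R, W)
        Y = update_Y(X, W, R)
        R = update_R(X, W, Y)
        # placeholder S-step (assumption): re-weight projected pairwise distances
        # and project each row back onto the simplex
        P = X @ W
        sq = np.sum(P ** 2, axis=1)
        D2 = sq[:, None] + sq[None, :] - 2.0 * P @ P.T
        S = np.vstack([project_simplex(-row) for row in D2])
        if np.abs(Y - Y_old).sum() < tol:                 # Y stable: stop iterating
            break
    return Y
```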
Compared with the prior art, the invention has the following advantages:
The invention condenses the learning of the relation matrix, the learning of the spectral representation, and the optimization of the k-means clustering together with the learning of the transition matrix, which the prior art performs as three separate steps, into a single step that directly yields the clustering result. Its clustering time complexity on big data is linear, only simple mathematical models are involved when writing the code, and it is easy to implement. In addition, sub-steps 2) to 5) take into account, throughout the computation, the preservation of the structural characteristics of the raw data samples in the low-dimensional feature space, and a better hyperplane is considered when the k-means result is rotated, so the clustering accuracy is guaranteed.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a convergence plot for the monk data set.
Specific embodiment
Specific embodiments of the invention are further described below with reference to the accompanying drawings and examples, but they do not limit the present invention.
Embodiment:
The specific implementation process of the invention is illustrated with the monk data set from UCI, an artificial data set with added noise that was created to test the solving of one of the monk's problems; it contains three artificial domains defined over the same attribute space. The data set has 432 samples, the attribute dimensionality is 6 (each domain is described by two dimensions), and the true number of classes is 2. Because noisy samples are included, this data set is well suited to testing the robustness of the algorithm of the invention to noise.
Fig. 1 shows a one-step spectral clustering method based on spectral rotation: the learning of the relation matrix, the learning of the spectral representation, the optimization of the k-means clustering and the learning of the transition matrix are integrated into one framework; the relation matrix is learned in the low-dimensional feature space obtained after reducing the dimensionality of the raw data set; and a better clustering hyperplane is found by rotating the original k-means result, yielding a better clustering result. The method comprises the following steps:
(1) Integrate the learning of the relation matrix, the learning of the spectral representation, the optimization of the k-means clustering and the learning of the transition matrix into one framework and set the objective function. The detailed process is as follows:
The objective function is set as follows, with the corresponding constraints:
s.t. Y ∈ {0,1}, Y1 = 1, R^T R = I, S ∈ S, W^T X^T X W = I;
where X is the training set, x_i denotes the i-th sample of the data set, W is the coefficient matrix of the attributes to be learned, and S is the relation matrix between the training samples.
The first and second terms of the objective function serve to learn, in the low-dimensional feature space of the raw data set, a relation matrix that better expresses the relationships between samples; this removes the inaccuracy that noise possibly present in the raw data would otherwise introduce into the clustering result, and a better spectral representation is learned at the same time. The third term makes the predicted result approach the true clustering result more closely through spectral rotation, thereby improving clustering accuracy. The first constraint, Y ∈ {0,1}, Y1 = 1, forces Y to be an indicator matrix, i.e. a matrix in which every row contains exactly one 1 and all remaining elements are 0. The second constraint, R^T R = I, makes the projected samples as well separated as possible, which facilitates the subsequent clustering. The third constraint, S ∈ S, restricts the values of the matrix S. The fourth constraint, W^T X^T X W = I, ensures that the new dimension-reduced samples XW are obtained by an orthogonal projection, making the resulting spectral representation more reasonable and accurate.
(2) Solve the objective function to obtain the clustering result. The sub-steps are as follows:
1) Initialize the matrices S, W, Y and R to give the whole iterative process an initial value: S is constructed with a heat kernel function, W is a completely random matrix, Y is a random indicator matrix, and R is an identity matrix.
2) Fix the other variables and update W within an ADMM framework, setting XW − Z = 0; the objective function at this point is as follows:
where Z is the new variable introduced in order to optimize W, U is the residual term obtained with the method of Lagrange multipliers, and the superscript denotes the iteration index.
W is updated as follows:
3) Fix the other variables and update Y; the objective function at this point is as follows:
where the subscript F indicates that the error is measured with the Frobenius norm, so that the prediction error of every sample is taken into account.
Y is updated as follows:
where y_{i,j} denotes each element of the indicator matrix Y, G is an all-ones matrix, and j is the column index at which the corresponding entry of G − XWR is smallest.
4) Fix the other variables and update R; the objective function at this point is as follows:
R is updated as follows:
R = J M^T, with W^T X^T Y = J Σ M^T,
where J and M are the left and right unitary matrices obtained from the singular value decomposition of W^T X^T Y.
5) Fix the other variables and update S; the objective function at this point is as follows:
where α and β are real coefficients that balance the error term and the regularization term, W is the coefficient matrix of the samples, and S is the similarity matrix of the samples.
S is updated as follows:
where θ denotes the introduced Lagrange multiplier term and ρ is the coefficient that adjusts this multiplier term.
6) Repeat sub-steps 2) to 5) until Y is stable, that is, until the change between two consecutive results is smaller than a given threshold or the number of iterations reaches a given limit, at which point the iteration terminates; the Y obtained at this point is the clustering result.
7) Output the clustering result Y. For a test data set, likewise first execute sub-step 1) to initialize the matrices S, W, Y and R, then repeat sub-steps 2) to 5) until the result is stable; the resulting Y matrix is the clustering result of the test set.
Embodiment:
The specific implementation process of step (2) of the invention is illustrated with the monk data set from UCI, an artificial data set with added noise that was created to test the solving of one of the monk's problems; it contains three artificial domains defined over the same attribute space. The data set has 432 samples, the attribute dimensionality is 6 (each domain is described by two dimensions), and the true number of classes is 2. Because noisy samples are included, this data set is well suited to testing the robustness of the algorithm of the invention to noise.
S, W, R and Y are initialized as follows (the termination condition is set so that iteration stops when the change between two consecutive iterations is less than 10^-5 or the number of iterations reaches 50):
S = [0, 0, ..., 0; 0, 0, ..., 0; ...; 0, 0, ..., 0] (a 432 × 432 all-zero matrix)
W = [0.2934, 0.4656; 0.7553, 0.4472; 0.4062, 0.2231; 0.1246, 0.9369; 0.6528, 0.2256; 0.9146, 0.7633] (a 6 × 2 random matrix)
R = [1, 0; 0, 1] (a 2 × 2 identity matrix)
Y = [1, 0; 1, 0; ...; 0, 1]
Result after the first iteration:
S = [0.0023, 0.0023, ..., 0.0023; 0.0023, 0.0023, ..., 0.0023; ...; 0.0023, 0.0023, ..., 0.0023]
W = [-0.0168, 0.0212; 0.0211, -0.0157; 0.0307, -0.0202; 0.0047, 3.0093e-04; -0.0190, 0.0217; 0.0109, -9.5575e-04]
R = [1.0000, -6.2747e-05; 6.2747e-05, 1.0000]
Y = [1, 0; 1, 0; ...; 0, 1]
Result after the iteration has stabilized:
S = [0.0023, 0.0023, ..., 0.0023; 0.0023, 0.0023, ..., 0.0023; ...; 0.0023, 0.0023, ..., 0.0023]
W = [0.0085, -0.0035; 0.0085, -0.0035; 0.0166, -0.0066; 0.0085, -0.0035; -0.0270, 0.0298; 0.0166, -0.0066]
R = [1.0000, 5.9523e-04; -5.9523e-04, 1.0000]
Y = [1, 0; 1, 0; ...; 0, 1]
The clustering accuracy computed at this point is 0.6667.
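Clustering accuracy figures of this kind are usually computed by matching predicted clusters to true classes under the best label permutation; a sketch using the Hungarian algorithm (the patent does not state which matching it uses, so this is an assumption):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best-match clustering accuracy: permute cluster labels to maximise
    agreement with the true labels, then report the fraction of matches."""
    classes = np.unique(y_true)
    clusters = np.unique(y_pred)
    cost = np.zeros((len(clusters), len(classes)))
    for i, c in enumerate(clusters):
        for j, t in enumerate(classes):
            cost[i, j] = -np.sum((y_pred == c) & (y_true == t))
    row, col = linear_sum_assignment(cost)
    return -cost[row, col].sum() / len(y_true)
```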
Fig. 2 shows the convergence behaviour on the monk data set. As can be seen from the figure, the present invention has essentially stabilized by the 8th iteration, and even if a faster convergence speed is pursued, the result at the 6th iteration is already good. Fast convergence lets the program obtain the clustering result quickly and reduces the time required for clustering, which makes it feasible to apply the method to big-data processing in real life.
Compared with the prior art, the beneficial effects of the invention are as follows:
The invention condenses the learning of the relation matrix, the learning of the spectral representation, and the optimization of the k-means clustering together with the learning of the transition matrix, which the prior art performs as three separate steps, into a single step that directly yields the clustering result. Its clustering time complexity on big data is linear, only simple mathematical models are involved when writing the code, and it is easy to implement. In addition, sub-steps 2) to 5) take into account, throughout the computation, the preservation of the structural characteristics of the raw data samples in the low-dimensional feature space, and a better hyperplane is considered when the k-means result is rotated, so the clustering accuracy is guaranteed.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings and examples, but the present invention is not limited to the described embodiments. For those skilled in the art, various changes, modifications, substitutions and variations made to these embodiments without departing from the principle and spirit of the present invention still fall within the protection scope of the present invention.
Claims (3)
1. A one-step spectral clustering method based on spectral rotation, characterized in that the learning of the relation matrix, the learning of the spectral representation, the optimization of the k-means clustering and the learning of the transition matrix are integrated into one framework, the relation matrix is learned in the low-dimensional feature space obtained after reducing the dimensionality of the raw data set, and a better clustering hyperplane is found by rotating the original k-means result, yielding a better clustering result; the method comprises the following steps:
(1) integrating the learning of the relation matrix, the learning of the spectral representation, the optimization of the k-means clustering and the learning of the transition matrix into one framework and setting the objective function;
(2) solving the objective function to obtain the clustering result.
2. The one-step spectral clustering method based on spectral rotation according to claim 1, characterized in that the detailed process of step (1) is as follows:
the objective function is set as follows, with the corresponding constraints: s.t. Y ∈ {0,1}, Y1 = 1, R^T R = I, S ∈ S, W^T X^T X W = I;
where X is the training set, x_i denotes the i-th sample of the data set, W is the coefficient matrix of the attributes to be learned, and S is the relation matrix between the training samples;
the first and second terms of the objective function serve to learn, in the low-dimensional feature space of the raw data set, a relation matrix that better expresses the relationships between samples, removing the inaccuracy that noise possibly present in the raw data would otherwise introduce into the clustering result, while a better spectral representation is learned at the same time; the third term makes the predicted result approach the true clustering result more closely through spectral rotation, thereby improving clustering accuracy; the first constraint, Y ∈ {0,1}, Y1 = 1, forces Y to be an indicator matrix, i.e. a matrix in which every row contains exactly one 1 and all remaining elements are 0; the second constraint, R^T R = I, makes the projected samples as well separated as possible, facilitating the subsequent clustering; the third constraint, S ∈ S, restricts the values of the matrix S; the fourth constraint, W^T X^T X W = I, ensures that the new dimension-reduced samples XW are obtained by an orthogonal projection, making the resulting spectral representation more reasonable and accurate.
3. The one-step spectral clustering method based on spectral rotation according to claim 1 or 2, characterized in that the sub-steps of step (2) are as follows:
1) initializing the matrices S, W, Y and R to give the whole iterative process an initial value, wherein S is constructed with a heat kernel function, W is a completely random matrix, Y is a random indicator matrix, and R is an identity matrix;
2) fixing the other variables and updating W within an ADMM framework by setting XW − Z = 0, the objective function at this point being as follows:
where Z is the new variable introduced in order to optimize W, U is the residual term obtained with the method of Lagrange multipliers, and the superscript denotes the iteration index;
W being updated as follows:
3) fixing the other variables and updating Y, the objective function at this point being as follows:
where the subscript F indicates that the error is measured with the Frobenius norm, so that the prediction error of every sample is taken into account;
Y being updated as follows:
where y_{i,j} denotes each element of the indicator matrix Y, G is an all-ones matrix, and j is the column index at which the corresponding entry of G − XWR is smallest;
4) fixing the other variables and updating R, the objective function at this point being as follows;
R being updated as R = J M^T, with W^T X^T Y = J Σ M^T,
where J and M are the left and right unitary matrices obtained from the singular value decomposition of W^T X^T Y;
5) fixing the other variables and updating S, the objective function at this point being as follows:
where α and β are real coefficients that balance the error term and the regularization term, W is the coefficient matrix of the samples, and S is the similarity matrix of the samples;
S being updated as follows:
where θ denotes the introduced Lagrange multiplier term and ρ is the coefficient that adjusts this multiplier term;
6) repeating sub-steps 2) to 5) until Y is stable, that is, until the change between two consecutive results is smaller than a given threshold or the number of iterations reaches a given limit, at which point the iteration terminates, the Y obtained at this point being the clustering result;
7) outputting the clustering result Y, and for a test data set, likewise first executing sub-step 1) to initialize the matrices S, W, Y and R and then repeating sub-steps 2) to 5) until the result is stable, the resulting Y matrix being the clustering result of the test set.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811187977.3A (CN109409422A) | 2018-10-12 | 2018-10-12 | A one-step spectral clustering method based on spectral rotation |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811187977.3A (CN109409422A) | 2018-10-12 | 2018-10-12 | A one-step spectral clustering method based on spectral rotation |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN109409422A (en) | 2019-03-01 |
Family

ID=65467766

Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811187977.3A (CN109409422A, pending) | 2018-10-12 | 2018-10-12 | A one-step spectral clustering method based on spectral rotation |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN109409422A (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110175631A | 2019-04-28 | 2019-08-27 | 南京邮电大学 | A multi-view clustering method based on jointly learning the subspace structure and the cluster indicator matrix |
| CN110175631B | 2019-04-28 | 2022-08-30 | 南京邮电大学 | Multi-view clustering method based on common learning subspace structure and clustering indication matrix |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190301 |