CN109961088B - Unsupervised non-linear self-adaptive manifold learning method - Google Patents

Unsupervised non-linear self-adaptive manifold learning method

Info

Publication number
CN109961088B
Authority
CN
China
Prior art keywords
algorithm
processor
value
adaptive
manifold learning
Prior art date
Legal status
Active
Application number
CN201910114146.1A
Other languages
Chinese (zh)
Other versions
CN109961088A (en)
Inventor
王邦军
高家俊
李凡长
张莉
Current Assignee
Suzhou University
Original Assignee
Suzhou University
Priority date
Filing date
Publication date
Application filed by Suzhou University
Priority to CN201910114146.1A
Publication of CN109961088A
Application granted
Publication of CN109961088B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24143 Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unsupervised nonlinear adaptive manifold learning method comprising the steps of expanding neighboring points, combining neighboring points, and defining an objective function accordingly, where α is a trade-off parameter: by adjusting α, the balance between the two factors considered by the algorithm can be tuned flexibly. Clearly, when α takes a relatively small value the formula focuses more on the global pairwise distance error, and when α takes a large value the formula gives more weight to the local topological relationships. x_j ∈ MNN(x_i) means that, after applying the adaptive neighbor method, x_j is a neighboring point of x_i; the reconstruction weight matrix W is obtained by optimizing the corresponding problem. The invention has the following beneficial effects: the algorithm ingeniously combines the advantages of the LLE and Isomap algorithms, considers local and global characteristics simultaneously, and can comprehensively and effectively extract the features of high-dimensional data.

Description

Unsupervised non-linear self-adaptive manifold learning method
Technical Field
The invention relates to the field of adaptive manifold learning methods, in particular to an unsupervised nonlinear adaptive manifold learning method.
Background
In machine learning, as the data dimensionality increases, the computational cost grows exponentially; this is known as the "curse of dimensionality." Therefore, it is very important for machine learning to obtain a compact representation of high-dimensional data in a low-dimensional space through an appropriate feature extraction method. There are many methods for deriving new features from the original features. They are generally classified into feature selection and feature extraction, and the latter is more widely used.
In recent years, researchers have proposed many new feature extraction methods. These methods can generally be classified into three categories: supervised, semi-supervised and unsupervised methods. Supervised feature extraction methods extract features from data that carry label information, and rely on labeled samples to improve classification accuracy. Representative methods are Linear Discriminant Analysis (LDA), supervised isometric feature mapping (S-ISOMAP), Supervised Locally Linear Embedding (SLLE) and Supervised Locality Preserving Projection (SLPP). Semi-supervised methods use data in which only a portion carries label information, while the rest is unlabeled.
However, in the real world, correct and sufficient label information is not readily available, so unsupervised learning methods that use only unlabeled data are very important for processing data. Principal Component Analysis (PCA) is a very classical unsupervised feature extraction method, but because it is linear in nature, it does not work as well as expected when the data we want to process lie on a manifold. Researchers have since proposed many nonlinear manifold learning algorithms, e.g., isometric feature mapping (Isomap), Locally Linear Embedding (LLE), Laplacian Eigenmaps (LE), Locality Preserving Projection (LPP), Unsupervised Discriminant Projection (UDP), etc.
The two most classical unsupervised nonlinear dimension reduction algorithms are the Isomap algorithm and the LLE algorithm. The former mainly considers maintaining global pairwise distances, and the latter mainly considers maintaining local topologies.
Isomap is one of the most classical unsupervised manifold learning algorithms. The algorithm was proposed by Tenenbaum et al. in Science in 2000. Building on the multidimensional scaling (MDS) algorithm, Isomap introduces the geodesic distance as the measure of the distance between two points. The core goal of the algorithm is to find an optimal subspace that minimizes the geodesic distance error between pairs of points. Given a data set
$$X = \{x_1, x_2, \ldots, x_N\} \subset \mathbb{R}^{D},$$
where N is the number of samples in the data set and D is the original dimensionality of the data set,
$$Y = \{y_1, y_2, \ldots, y_N\} \subset \mathbb{R}^{d}$$
denotes the corresponding set of N points in the reduced d-dimensional space (d ≤ D). The algorithm flow can be divided into the following three steps:
step 1: the nearest neighbors of each point are determined using a K Nearest Neighbor (KNN) classification algorithm.
Step 2: After the nearest neighbors have been determined, we need to calculate the geodesic distances. We first compute the Euclidean distance between points. If two points are not nearest neighbors, we set the distance between them to +∞ and then compute the shortest paths using Dijkstra's algorithm or Floyd's algorithm, thereby obtaining the geodesic distances between points. d(x_i, x_j) denotes the Euclidean distance between point x_i and point x_j, and d_G(x_i, x_j) denotes their geodesic distance.
Step 3: A low-dimensional representation of the data is obtained by minimizing the following cost function:
$$\min_{Y}\ \sum_{i,j}\left(d(y_i, y_j) - d_G(x_i, x_j)\right)^{2} \qquad (1)$$
We can solve this in the same way as MDS. Let H = I − (1/N)ee^T, where I is the N × N identity matrix and e is the vector whose entries are all 1; equation (1) can then be written as
$$\min_{Y}\ \left\| R - Y Y^{T} \right\|^{2}$$
where R = −HQH/2, Q is the matrix obtained from
$$Q_{ij} = d_{G}^{2}(x_i, x_j),$$
and ‖·‖ denotes the Frobenius norm. We can then compute
$$Y = \left[\sqrt{\lambda_1}\,V_1, \sqrt{\lambda_2}\,V_2, \ldots, \sqrt{\lambda_m}\,V_m\right]$$
to obtain the value of Y, where λ_1, λ_2, ..., λ_m and V_1, V_2, ..., V_m denote the largest m eigenvalues of R and their corresponding eigenvectors, respectively.
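To make the three steps concrete, the following Python sketch illustrates the Isomap procedure as described above (KNN graph, shortest-path geodesic distances, and the MDS-style eigendecomposition of R). The function name, the choice of k, and the use of scikit-learn and SciPy helpers are assumptions for illustration only, not part of the patent.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap_sketch(X, k=7, m=2):
    """Minimal Isomap sketch: X is an (N, D) data matrix; returns an (N, m) embedding."""
    N = X.shape[0]
    # Steps 1-2: KNN graph with Euclidean edge weights; shortest paths (Dijkstra)
    # over the symmetrized graph give the geodesic distances d_G.
    knn = kneighbors_graph(X, n_neighbors=k, mode='distance')
    d_G = shortest_path(knn, method='D', directed=False)
    # Step 3: classical-MDS-style solution, R = -HQH/2 with Q_ij = d_G(x_i, x_j)^2.
    Q = d_G ** 2
    H = np.eye(N) - np.ones((N, N)) / N
    R = -H @ Q @ H / 2.0
    vals, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:m]                # indices of the largest m eigenvalues
    lam, V = vals[idx], vecs[:, idx]
    # Y = [sqrt(lambda_1) V_1, ..., sqrt(lambda_m) V_m]
    return V * np.sqrt(np.maximum(lam, 0.0))
```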
LLE is one of the best known non-linear dimension reduction algorithms. The idea of LLE is to assume that high dimensional data is locally linear and that a sample can be linearly represented by several neighboring samples. In other words, the algorithm goal of LLE is to maintain a local topology of the data. The LLE algorithm flow can be summarized into three steps:
step 1: the nearest neighbors of each point are determined using a K Nearest Neighbor (KNN) classification algorithm.
Step 2: the local reconstruction weight matrix is calculated by minimizing a cost function:
$$\min_{W}\ \sum_{i}\left\| x_i - \sum_{j} W_{ij}\,x_j \right\|^{2}, \quad \text{s.t.}\ \sum_{j} W_{ij} = 1$$
Step 3: The optimal low-dimensional embedding is obtained by minimizing the cost function:
$$\min_{Y}\ \sum_{i}\left\| y_i - \sum_{j} W_{ij}\,y_j \right\|^{2}$$
The above formula can be written as f(Y) = Σ_{i,j} M_{ij}(y_i^T y_j), where M = (I − W)^T(I − W). Finally, we perform an eigendecomposition of the matrix M and take the eigenvectors corresponding to its 2nd through (d+1)-th smallest eigenvalues. These eigenvectors constitute Y.
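As a companion to the Isomap sketch above, the following Python sketch illustrates the LLE steps just described: neighbor search, reconstruction weights from local least-squares problems, and the eigendecomposition of M = (I − W)^T(I − W). The regularization term and all names are illustrative assumptions added for numerical stability, not details taken from the patent.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lle_sketch(X, k=7, d=2, reg=1e-3):
    """Minimal LLE sketch: X is an (N, D) data matrix; returns an (N, d) embedding."""
    N = X.shape[0]
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    W = np.zeros((N, N))
    for i in range(N):
        nb = idx[i, 1:]                             # k neighbors of x_i (idx[i, 0] is x_i itself)
        Z = X[nb] - X[i]                            # neighborhood centered at x_i
        G = Z @ Z.T                                 # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)          # regularize in case G is singular
        w = np.linalg.solve(G, np.ones(k))
        W[i, nb] = w / w.sum()                      # enforce sum_j W_ij = 1
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = np.linalg.eigh(M)                  # eigenvalues in ascending order
    return vecs[:, 1:d + 1]                         # 2nd ... (d+1)-th smallest eigenvectors form Y
```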
The traditional technology has the following technical problems:
we know that Isomap aims to preserve global pairwise distances, while LLE wants to preserve neighborhood relational structures. Obviously, the former only considers the whole and loses the local structural features. The latter concerns the local area and is totally lacking in consideration. Furthermore, selecting the nearest neighbor in the LLE algorithm has been a pending problem. Without a priori knowledge, it is difficult to artificially select an appropriate value. If the value of K is too large, the smoothness of the entire high-dimensional data is destroyed and the small scale geometry on the original data cannot be preserved. Also, it is not reasonable to use the same K to indiscriminately select neighbors for each data point. This method only considers the topology of data points that are close to each other, ignoring important structural features of points that are far apart. Furthermore, large-scale neighborhood structures contain a large amount of unimportant structural information, which will increase the amount of unnecessary computation, even destroying some of its own real local topology by selecting some wrong neighbors.
Disclosure of Invention
In order to allow the algorithm to consider both global and local features of the data, rather than focus solely on a single feature, we propose an unsupervised nonlinear adaptive manifold learning method (UNAML) that combines global and local information. The basic idea of UNAML is to combine the advantages of Isomap and LLE to make feature extraction more comprehensive. To solve the neighbor selection problem, we use an adaptive method to compute the neighbor points. The method solves the problems to a certain extent and is helpful for better discovering and maintaining important structural information.
In order to solve the above technical problem, the present invention provides an unsupervised nonlinear adaptive manifold learning method, including:
expanding the neighboring points;
combining adjacent points;
according to the above, an objective function is defined:
$$\min_{Y}\ \sum_{i,j}\left(d(y_i, y_j) - d_G(x_i, x_j)\right)^{2} + \alpha \sum_{i}\left\| y_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,y_j \right\|^{2} \qquad (4)$$
where α is a trade-off parameter: by adjusting α, the balance between these two considerations in the algorithm can be tuned flexibly. Clearly, when α takes a relatively small value the formula focuses more on the global pairwise distance error, and when α takes a large value the formula gives more weight to the local topological relationships. x_j ∈ MNN(x_i) means that, after applying the adaptive neighbor method above, x_j is a neighboring point of x_i. The reconstruction weight matrix W is obtained by optimizing the following problem:
$$\min_{W}\ \sum_{i}\left\| x_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,x_j \right\|^{2}, \quad \text{s.t.}\ \sum_{j} W_{ij} = 1$$
If J_GL = Σ_{i,j}(d(y_i, y_j) − d_G(x_i, x_j))² and
$$J_{NN} = \sum_{i}\left\| y_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,y_j \right\|^{2}$$
are defined,
Then equation 4 can be written as:
$$\min_{Y}\ J_{GL} + \alpha J_{NN} \qquad (6)$$
J_GL and J_NN are then written as follows:
$$J_{GL} = \operatorname{trace}(Y^{T} T Y) \qquad (7)$$
$$J_{NN} = \operatorname{trace}(Y^{T} M Y) \qquad (8)$$
Here T = −HQH/2 and H = I − (1/N)ee^T, where I is the N × N identity matrix and e is the vector whose entries are all 1; M = (I − W)^T(I − W); and Q is obtained by computing
$$Q_{ij} = d_{G}^{2}(x_i, x_j).$$
then, equation 6 can be written as:
$$\min_{Y}\ \sigma(Y) = \operatorname{trace}\left(Y^{T}(T + \alpha M)Y\right)$$
Define A = T + αM; then, by computing
$$Y = \left[\sqrt{\lambda_1}\,V_1, \sqrt{\lambda_2}\,V_2, \ldots, \sqrt{\lambda_m}\,V_m\right],$$
the value of Y is obtained, where λ_1, λ_2, ..., λ_m and V_1, V_2, ..., V_m denote the largest m eigenvalues of A and their corresponding eigenvectors, respectively.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the methods when executing the program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the methods.
A processor for running a program, wherein the program when running performs any of the methods.
The invention has the beneficial effects that:
the algorithm ingeniously combines the advantages of the LLE algorithm and the isomap algorithm, simultaneously considers local and global characteristics, and can comprehensively and effectively extract the characteristics of high-dimensional data. In addition, the self-adaptive neighbor selection method introduced by the method also effectively solves the problems that the artificial K value selection is difficult and the neighbor quantity is fixed and is too rigid in the traditional KNN algorithm.
Drawings
Fig. 1(a) is a first schematic diagram of the first experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 1(b) is a second schematic diagram of the first experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 1(c) is a third schematic diagram of the first experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 2(a) is a first schematic diagram of the second experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 2(b) is a second schematic diagram of the second experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 2(c) is a third schematic diagram of the second experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 3(a) is a first schematic diagram of the third experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 3(b) is a second schematic diagram of the third experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Fig. 3(c) is a third schematic diagram of the third experimental result of the unsupervised nonlinear adaptive manifold learning method according to the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
UNAML considers both global and local features, with the goal of maintaining the global feature structure and the local topological relationships among neighboring data at the same time. The goal of our formulation is to minimize both the global pairwise distance error and the local structure error by fusing Isomap and LLE. We use an adaptive nearest neighbor selection method to construct the adjacency graph rather than the common KNN algorithm. This approach has been used in adaptive locality preserving projection (MLPP), one of the extended versions of LPP. The adaptive nearest neighbor selection method uses a "graph growth" strategy to construct the adjacency graph. In this approach we select only the single nearest point, in order to simplify the problem of choosing the number of neighbors. The method can be summarized in two steps:
in the first step, we need to extend the neighbors. For example, x1Nearest neighbor is x2But x2Nearest neighbor is x3. In this case, x1And x3Are all x2The nearest neighbors of.
The second step is to merge neighbors. For example, in step 1, x1And x3Are also considered to be neighbors of each other. In this way we do not need to select parameters but can obtain an adjacency graph with each point having an unfixed number of neighboring points.
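A minimal Python sketch of this two-step adaptive neighbor selection is given below. The bookkeeping (a symmetric adjacency built from each point's single nearest neighbor, then closed over shared neighbors) is our reading of the x_1/x_2/x_3 example above, so the details should be treated as an assumption rather than the patent's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def adaptive_neighbors(X):
    """Return a symmetric boolean adjacency matrix with a variable number of neighbors per point."""
    N = X.shape[0]
    _, idx = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)
    nearest = idx[:, 1]                             # single nearest neighbor of each point
    adj = np.zeros((N, N), dtype=bool)
    # Step 1 (expand): a point's neighbors are the point it chose and every point that chose it.
    for i in range(N):
        adj[i, nearest[i]] = adj[nearest[i], i] = True
    # Step 2 (merge): two points that share a common neighbor become neighbors of each other.
    merged = adj.copy()
    for p in range(N):
        nb = np.where(adj[p])[0]
        merged[np.ix_(nb, nb)] = True
    np.fill_diagonal(merged, False)
    return merged
```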
From the above, we now define our objective function:
$$\min_{Y}\ \sum_{i,j}\left(d(y_i, y_j) - d_G(x_i, x_j)\right)^{2} + \alpha \sum_{i}\left\| y_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,y_j \right\|^{2} \qquad (4)$$
Here α is a trade-off parameter that flexibly adjusts the balance between these two considerations in the algorithm. Clearly, this formula focuses more on the global pairwise distance error when α takes a relatively small value, and considers more of the local topological relationships when α takes a larger value. x_j ∈ MNN(x_i) means that, after applying the adaptive neighbor method above, x_j is a neighboring point of x_i. Similar to LLE, the reconstruction weight matrix W can be obtained by optimizing the following problem:
$$\min_{W}\ \sum_{i}\left\| x_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,x_j \right\|^{2}, \quad \text{s.t.}\ \sum_{j} W_{ij} = 1$$
If we define J_GL = Σ_{i,j}(d(y_i, y_j) − d_G(x_i, x_j))² and
$$J_{NN} = \sum_{i}\left\| y_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,y_j \right\|^{2},$$
Then equation 4 can be written as:
$$\min_{Y}\ J_{GL} + \alpha J_{NN} \qquad (6)$$
We then write J_GL and J_NN as follows:
$$J_{GL} = \operatorname{trace}(Y^{T} T Y) \qquad (7)$$
$$J_{NN} = \operatorname{trace}(Y^{T} M Y) \qquad (8)$$
Here T = −HQH/2 and H = I − (1/N)ee^T, where I is the N × N identity matrix and e is the vector whose entries are all 1; M = (I − W)^T(I − W); and Q is obtained by computing
$$Q_{ij} = d_{G}^{2}(x_i, x_j).$$
Then, equation 6 can be written as:
$$\min_{Y}\ \sigma(Y) = \operatorname{trace}\left(Y^{T}(T + \alpha M)Y\right)$$
We define A = T + αM; then we can compute
$$Y = \left[\sqrt{\lambda_1}\,V_1, \sqrt{\lambda_2}\,V_2, \ldots, \sqrt{\lambda_m}\,V_m\right]$$
to obtain the value of Y, where λ_1, λ_2, ..., λ_m and V_1, V_2, ..., V_m denote the largest m eigenvalues of A and their corresponding eigenvectors, respectively.
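Putting the pieces together, the following Python sketch shows one possible end-to-end reading of the UNAML procedure derived above: geodesic distances for the global term T = −HQH/2, adaptive-neighbor reconstruction weights for the local term M = (I − W)^T(I − W), and an eigendecomposition of T + αM. The default α, the regularization, and all function names are assumptions for illustration; the final step follows the text in taking the largest m eigenpairs.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph, NearestNeighbors
from scipy.sparse.csgraph import shortest_path

def unaml_sketch(X, m=2, alpha=1.0, k=7, reg=1e-3):
    """Illustrative UNAML sketch: X is an (N, D) data matrix; returns an (N, m) embedding Y."""
    N = X.shape[0]
    # Global term: geodesic distances on a KNN graph (assumed connected),
    # then T = -HQH/2 with Q_ij = d_G^2(x_i, x_j).
    d_G = shortest_path(kneighbors_graph(X, n_neighbors=k, mode='distance'),
                        method='D', directed=False)
    H = np.eye(N) - np.ones((N, N)) / N
    T = -H @ (d_G ** 2) @ H / 2.0
    # Adaptive neighbors: expand around each point's single nearest neighbor, then merge.
    nearest = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)[1][:, 1]
    link = np.zeros((N, N), dtype=bool)
    for i in range(N):
        link[i, nearest[i]] = link[nearest[i], i] = True
    adj = link.copy()
    for p in range(N):
        nb = np.where(link[p])[0]
        adj[np.ix_(nb, nb)] = True
    np.fill_diagonal(adj, False)
    # Local term: reconstruction weights over the adaptive neighborhoods (rows of W sum to 1).
    W = np.zeros((N, N))
    for i in range(N):
        nb = np.where(adj[i])[0]
        Z = X[nb] - X[i]
        G0 = Z @ Z.T
        G = G0 + reg * max(np.trace(G0), 1.0) * np.eye(len(nb))
        w = np.linalg.solve(G, np.ones(len(nb)))
        W[i, nb] = w / w.sum()
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    # Combine the two terms and eigendecompose A = T + alpha * M (largest m eigenpairs, as in the text).
    vals, vecs = np.linalg.eigh(T + alpha * M)
    idx = np.argsort(vals)[::-1][:m]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```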
A specific application scenario of the present invention is described below:
our experiments used these datasets in total, including three face datasets (the YaleB database, the ORL database, and the AR database), an object dataset (the COIL20 database), and a handwriting dataset (the Hand _ draw _ digit database). The details of these data sets are shown in table 1. We compare our method to the following algorithms under the same conditions, including PCA, UDP, Isomap, LLE, and LPP. It is worth mentioning that PCA, UDP and LPP do not adjust parameters. In testing Isomap, the nearest neighbor number K is the same as K used in our method. The nearest neighbor K in LLE is set to 7.
We compare UNAML with classification results of other algorithms on the YaleB database, AR database and ORL database. We use Accuracy (AC) as a criterion.
Before the experiments began, we randomly selected a fixed number of samples from each class of each database to form a training set. In the experiments, we used a 1NN classifier based on the Euclidean distance. Considering that different training sets can lead to large differences in results, each experiment was repeated 50 times and the average of the 50 results was taken. The results of the experiments are shown in Figs. 1(a) to 3(c), and Tables 2 to 4 report the average and highest accuracy.
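For concreteness, a minimal sketch of this evaluation protocol is shown below: random per-class training/test splits of the embedded data, a Euclidean 1NN classifier, and accuracy averaged over repeated runs. The number of training samples per class, the random seed, and the function names are placeholders, not values from the reported experiments.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def evaluate_1nn(Y, labels, n_train_per_class=5, n_runs=50, seed=0):
    """Average and best 1NN accuracy over random per-class splits of an embedding Y."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_runs):
        train_idx, test_idx = [], []
        for c in np.unique(labels):
            perm = rng.permutation(np.where(labels == c)[0])
            train_idx.extend(perm[:n_train_per_class])
            test_idx.extend(perm[n_train_per_class:])
        clf = KNeighborsClassifier(n_neighbors=1).fit(Y[train_idx], labels[train_idx])
        accs.append(accuracy_score(labels[test_idx], clf.predict(Y[test_idx])))
    return float(np.mean(accs)), float(np.max(accs))
```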
From these figures and tables, we can observe the following: (1) With the reduction of dimensionality and the increase of training data, the classification accuracy generally improves. (2) Our approach performs better than the others in almost all cases, especially when the dimensionality is small. (3) On the YaleB dataset, Isomap has better performance, followed by LLE and LPP, while PCA and UDP perform poorly. On the AR dataset, LLE performs worse than all the other algorithms.
In the clustering experiments, we used the COIL20 database and the Hand_draw_digit database. We compared our results with the clustering results of PCA, UDP, Isomap, LLE and LPP in terms of Accuracy (AC) and Normalized Mutual Information (NMI). Both AC and NMI take values between 0 and 1; the higher the value, the better the clustering effect.
In the clustering experiments, we first choose values of C from 2 to 9. We then obtain a d-dimensional embedding Y of the data through each algorithm (where d = C + 1). Each time, we take the data of C classes from the d-dimensional embedding Y to form a new subset of Y, and then perform K-means clustering on this subset. Since the results of such clustering experiments are easily affected by the choice of subset and the initialization of the cluster centroids, for each value of C from 2 to 9 we extracted 30 subsets and ran K-means on each subset 30 times to mitigate these effects. The specific comparison results are shown in Tables 5, 6, 7 and 8.
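A minimal sketch of this clustering evaluation is shown below: K-means on a C-class subset of the embedding, scored by NMI and by accuracy after aligning cluster labels to class labels with Hungarian matching. The alignment step and the parameter values are assumptions about the evaluation details, not taken from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score
from scipy.optimize import linear_sum_assignment

def clustering_scores(Y_sub, labels_sub, C, seed=0):
    """AC and NMI of K-means with C clusters on a C-class subset of the embedding."""
    pred = KMeans(n_clusters=C, n_init=10, random_state=seed).fit_predict(Y_sub)
    nmi = normalized_mutual_info_score(labels_sub, pred)
    # Align predicted cluster ids to the true class labels before computing accuracy.
    classes = np.unique(labels_sub)
    overlap = np.zeros((C, len(classes)))
    for i in range(C):
        for j, c in enumerate(classes):
            overlap[i, j] = np.sum((pred == i) & (labels_sub == c))
    rows, cols = linear_sum_assignment(-overlap)    # maximize total overlap
    mapping = {r: classes[c] for r, c in zip(rows, cols)}
    ac = float(np.mean([mapping[p] == t for p, t in zip(pred, labels_sub)]))
    return ac, nmi
```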
From these tables we can conclude: (1) In general, the larger the value of C, the more accurate our clustering results. (2) The clustering results on the COIL20 database are much better than those on the Hand_draw_digit dataset. (3) The clustering results of our method are almost always better than those of the other algorithms on both datasets.
Table 1 (details of the datasets), Tables 2 to 4 (average and highest classification accuracy on the YaleB, AR and ORL databases) and Tables 5 to 8 (clustering AC and NMI on the COIL20 and Hand_draw_digit databases) are provided as images in the original publication and are not reproduced here.
The above-mentioned embodiments are merely preferred embodiments used to fully illustrate the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions or modifications made by those skilled in the art on the basis of the invention all fall within the protection scope of the invention, which is defined by the claims.

Claims (4)

1. An unsupervised nonlinear adaptive manifold learning method, comprising:
expanding the neighboring points;
combining adjacent points;
according to the above, an objective function is defined:
$$\min_{Y}\ \sum_{i,j}\left(d(y_i, y_j) - d_G(x_i, x_j)\right)^{2} + \alpha \sum_{i}\left\| y_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,y_j \right\|^{2} \qquad (4)$$
where α is a trade-off parameter: by adjusting α, the balance between these two considerations in the algorithm can be tuned flexibly. Clearly, when α takes a relatively small value the formula focuses more on the global pairwise distance error, and when α takes a large value the formula gives more weight to the local topological relationships. x_j ∈ MNN(x_i) means that, after applying the adaptive neighbor method above, x_j is a neighboring point of x_i. The reconstruction weight matrix W is obtained by optimizing the following problem:
$$\min_{W}\ \sum_{i}\left\| x_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,x_j \right\|^{2}, \quad \text{s.t.}\ \sum_{j} W_{ij} = 1$$
If J_GL = Σ_{i,j}(d(y_i, y_j) − d_G(x_i, x_j))² and
$$J_{NN} = \sum_{i}\left\| y_i - \sum_{x_j \in MNN(x_i)} W_{ij}\,y_j \right\|^{2}$$
are defined,
Then equation 4 can be written as:
$$\min_{Y}\ J_{GL} + \alpha J_{NN} \qquad (6)$$
J_GL and J_NN are then written as follows:
$$J_{GL} = \operatorname{trace}(Y^{T} T Y) \qquad (7)$$
$$J_{NN} = \operatorname{trace}(Y^{T} M Y) \qquad (8)$$
Here T = −HQH/2 and H = I − (1/N)ee^T, where I is the N × N identity matrix and e is the vector whose entries are all 1; M = (I − W)^T(I − W); and Q is obtained by computing
$$Q_{ij} = d_{G}^{2}(x_i, x_j);$$
then, equation 6 can be written as:
$$\min_{Y}\ \sigma(Y) = \operatorname{trace}\left(Y^{T}(T + \alpha M)Y\right)$$
Define A = T + αM; then, by computing
$$Y = \left[\sqrt{\lambda_1}\,V_1, \sqrt{\lambda_2}\,V_2, \ldots, \sqrt{\lambda_m}\,V_m\right],$$
the value of Y is obtained, where λ_1, λ_2, ..., λ_m and V_1, V_2, ..., V_m denote the largest m eigenvalues of A and their corresponding eigenvectors, respectively.
2. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of claim 1 are performed when the program is executed by the processor.
3. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as claimed in claim 1.
4. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of claim 1.
CN201910114146.1A 2019-02-13 2019-02-13 Unsupervised non-linear self-adaptive manifold learning method Active CN109961088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910114146.1A CN109961088B (en) 2019-02-13 2019-02-13 Unsupervised non-linear self-adaptive manifold learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910114146.1A CN109961088B (en) 2019-02-13 2019-02-13 Unsupervised non-linear self-adaptive manifold learning method

Publications (2)

Publication Number Publication Date
CN109961088A CN109961088A (en) 2019-07-02
CN109961088B true CN109961088B (en) 2020-09-29

Family

ID=67023680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910114146.1A Active CN109961088B (en) 2019-02-13 2019-02-13 Unsupervised non-linear self-adaptive manifold learning method

Country Status (1)

Country Link
CN (1) CN109961088B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184349A (en) * 2011-04-29 2011-09-14 河海大学 System and method for clustering gene expression data based on manifold learning
US20160132754A1 (en) * 2012-05-25 2016-05-12 The Johns Hopkins University Integrated real-time tracking system for normal and anomaly tracking and the methods therefor
WO2014152929A1 (en) * 2013-03-14 2014-09-25 Arizona Board Of Regents, A Body Corporate Of The State Of Arizona For And On Behalf Of Arizona State University Measuring glomerular number from kidney mri images

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101750210A (en) * 2009-12-24 2010-06-23 重庆大学 Fault diagnosis method based on OLPP feature reduction
CN102867191A (en) * 2012-09-04 2013-01-09 广东群兴玩具股份有限公司 Dimension reducing method based on manifold sub-space study
CN104408466A (en) * 2014-11-17 2015-03-11 中国地质大学(武汉) Semi-supervision and classification method for hyper-spectral remote sensing images based on local stream type learning composition
CN107563445A (en) * 2017-09-06 2018-01-09 苏州大学 A kind of method and apparatus of the extraction characteristics of image based on semi-supervised learning
CN109241813A (en) * 2017-10-17 2019-01-18 南京工程学院 The sparse holding embedding grammar of differentiation for unconstrained recognition of face
CN108052965A (en) * 2017-12-01 2018-05-18 杭州平治信息技术股份有限公司 It is a kind of based on the feature selection approach organized sparse specification and locally learnt

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
D. C. G. Pedronette et al., "Unsupervised manifold learning through reciprocal kNN graph and connected components for image retrieval tasks," Pattern Recognition, vol. 75, pp. 161-174, 30 December 2018. *
Wang Bangjun, "Research on decision-oriented information system requirements modeling," Journal of Soochow University (Natural Science Edition), no. 1, pp. 1-2, January 2006. *

Also Published As

Publication number Publication date
CN109961088A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
Ghodsi Dimensionality reduction a short tutorial
Tang et al. Linear dimensionality reduction using relevance weighted LDA
Zhang et al. A survey on concept factorization: From shallow to deep representation learning
De la Torre et al. Multimodal oriented discriminant analysis
Rozza et al. Novel Fisher discriminant classifiers
Zhang et al. An efficient framework for unsupervised feature selection
Wang et al. Embedding metric learning into an extreme learning machine for scene recognition
He et al. Graph-based semi-supervised learning as a generative model
Gao et al. Unsupervised nonlinear adaptive manifold learning for global and local information
Mahapatra et al. S-isomap++: Multi manifold learning from streaming data
Kurasova et al. Quality of quantization and visualization of vectors obtained by neural gas and self-organizing map
Kaski et al. Principle of learning metrics for exploratory data analysis
Li et al. Dimensionality reduction with sparse locality for principal component analysis
Law Clustering, dimensionality reduction, and side information
CN109961088B (en) Unsupervised non-linear self-adaptive manifold learning method
CN111027514B (en) Elastic retention projection method based on matrix index and application thereof
Yang et al. Robust landmark graph-based clustering for high-dimensional data
Liu et al. An improved spectral clustering algorithm based on local neighbors in kernel space
Zaidi et al. A gradient-based metric learning algorithm for k-nn classifiers
Rashmi et al. Optimal landmark point selection using clustering for manifold modeling and data classification
Murthy et al. Moments discriminant analysis for supervised dimensionality reduction
Banijamali et al. Semi-supervised representation learning based on probabilistic labeling
Jiang et al. Variable Selection based on a Two-stage Projection Pursuit Algorithm.
Feng et al. The framework of learnable kernel function and its application to dictionary learning of SPD data
Mehta Generalized multi-manifold graph ensemble embedding for multi-view dimensionality reduction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant