CN108388918B - Data feature selection method with structure retention characteristics - Google Patents

Data feature selection method with structure retention characteristics

Info

Publication number
CN108388918B
Authority
CN
China
Prior art keywords
expression
data set
feature selection
original data
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810167419.4A
Other languages
Chinese (zh)
Other versions
CN108388918A (en)
Inventor
Li Xuelong (李学龙)
Lu Quanmao (鲁全茂)
Dong Yongsheng (董永生)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS filed Critical XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN201810167419.4A priority Critical patent/CN108388918B/en
Publication of CN108388918A publication Critical patent/CN108388918A/en
Application granted granted Critical
Publication of CN108388918B publication Critical patent/CN108388918B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods

Abstract

The invention discloses a data feature selection method with a structure-preserving property, which yields a more effective unsupervised feature selection algorithm. The algorithm models the feature selection problem with a self-expression model, thereby avoiding the noise introduced by learning pseudo-label data; adding a structure-preservation constraint further improves the robustness of the algorithm and produces clustering results of higher accuracy. The method comprises the following implementation steps: (1) determining an original data set X and constructing a self-expression model of X; (2) adding a local manifold structure preservation constraint to the self-expression model; (3) constraining the reconstruction coefficient matrix W of the constrained model to obtain the objective function expression; (4) optimizing and solving the objective function expression; (5) performing feature selection with the solved feature selection matrix.

Description

Data feature selection method with structure retention characteristics
Technical Field
The invention belongs to the technical field of information processing, and particularly relates to a data feature selection method with a structure retention characteristic.
Background
Feature selection is a very effective data analysis technique that has attracted wide attention and research. It performs well in many practical tasks and has been applied in fields such as image processing and computer vision, for example face clustering, handwritten character recognition and object classification. According to whether label data are used, feature selection can be divided into three categories: supervised feature selection, semi-supervised feature selection, and unsupervised feature selection.
Supervised feature selection trains on labeled data and then verifies the result experimentally on test samples; because a large amount of prior information is used, supervised methods can obtain feature selection results of higher accuracy.
Semi-supervised feature selection mainly utilizes a small amount of label information, combined with unlabeled data, to jointly learn a feature selection model. Considering that data in practical applications are unlabeled and that labeling data is very expensive, unsupervised feature selection has attracted the attention of a great number of researchers and has developed rapidly in recent years.
Current unsupervised methods fall mainly into three categories: filtering-based methods, wrapper-based (encapsulation-based) methods, and embedding-based methods. Because embedding-based methods integrate the feature selection problem into model reconstruction, they can better exploit different characteristics of the data, such as manifold structure preservation and similarity preservation.
The first is the filtering-based feature selection method, which typically uses statistical properties to select important features. Representative work is that of He, Cai and Niyogi in "X. He, D. Cai, P. Niyogi, Laplacian score for feature selection, in: Neural Information Processing Systems, pp. 507-514, 2005", which measures the importance of each feature by its locality-preserving power. The method first computes the distances between sample points and finds the k nearest neighbours of each sample point to construct a similarity matrix; it then builds the corresponding graph Laplacian from the similarity matrix, computes the Laplacian score of each feature as a measure of its importance, and finally selects the h most important features.
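By way of illustration, a minimal NumPy sketch of this filtering pipeline follows; the neighbour count k, the kernel width sigma and all variable names are illustrative assumptions and are not taken from the cited paper.

```python
import numpy as np

def laplacian_score(X, k=5, sigma=1.0):
    """Laplacian score of each feature (He, Cai & Niyogi, 2005).

    X: (N, d) data matrix with one sample per row. Lower scores
    indicate features with stronger locality-preserving power.
    """
    N, d = X.shape
    # Pairwise squared Euclidean distances between samples.
    sq = np.sum(X ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    # k-nearest-neighbour graph with heat-kernel weights.
    S = np.zeros((N, N))
    for m in range(N):
        nn = np.argsort(dist2[m])[1:k + 1]      # skip the point itself
        S[m, nn] = np.exp(-dist2[m, nn] / sigma)
    S = np.maximum(S, S.T)                      # symmetrise the graph
    D = np.diag(S.sum(axis=1))
    L = D - S                                   # graph Laplacian
    d1 = S.sum(axis=1)
    scores = np.empty(d)
    for i in range(d):
        f = X[:, i]
        f = f - (f @ d1) / d1.sum()             # remove the trivial component
        scores[i] = (f @ L @ f) / max(f @ D @ f, 1e-12)
    return scores

# The h most important features are the ones with the smallest scores:
# selected = np.argsort(laplacian_score(X))[:h]
```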
The second is the wrapper-based (encapsulation-based) feature selection method, which usually selects features with a prediction model or learning method and evaluates feature importance according to a predetermined learning algorithm. Since clustering is an important research problem in unsupervised learning, some clustering-based criteria are used to evaluate feature selection and the resulting clusters. Representative work is the method proposed by Dy and Brodley in "J. G. Dy, C. E. Brodley, Feature selection for unsupervised learning, Journal of Machine Learning Research, vol. 5, pp. 845-889, 2004", which wraps the feature selection problem into a clustering algorithm and then screens candidate feature subsets with two evaluation indices: scatter separability and maximum likelihood.
The third is the embedding-based feature selection method, which embeds the unsupervised feature selection problem into model reconstruction and then performs feature selection using properties of the data such as structure preservation and similarity preservation. Since spectral analysis has been successfully applied to many problems in unsupervised learning, unsupervised feature selection methods based on spectral analysis have also been proposed and achieve good results. These methods learn a pseudo-label matrix of the data by constructing a similarity matrix and then use the pseudo-label matrix to guide unsupervised feature selection, so obtaining a high-quality pseudo-label matrix is their core difficulty. Representative work is the nonnegative feature selection method proposed by Li, Yang et al. in "Z. Li, Y. Yang, J. Liu, X. Zhou, H. Lu, Unsupervised feature selection using nonnegative spectral analysis, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2012", which introduces a nonnegativity constraint on top of spectral analysis to reflect the separability information in the data, uses that information to learn the pseudo-label matrix, and thereby guides the subsequent feature selection process.
Although filtering-based methods are easy to implement, they evaluate features individually and usually ignore global information, so they cannot obtain good feature selection results on some tasks. Wrapper-based methods tend to have high time complexity and to cause overfitting, which degrades the feature selection results. Embedding-based methods usually need to learn a class-label matrix; but the real class-label matrix consists of discrete values and is difficult to solve for, so current methods approximate the discrete values with continuous ones, which introduces noise and yields unstable feature selection results.
Disclosure of Invention
The invention aims to provide a data feature selection method with a structure-preserving property that addresses the defects of the prior methods described in the background. The method models the feature selection problem with a self-expression model of the original data set X and then adds a local manifold structure preservation constraint; compared with traditional embedding-based methods, it does not need to learn pseudo labels for the data, thereby avoiding the noise they introduce into the feature selection result. The method can effectively explore the similarity relations among the data in the original data set X and preserve the local manifold structure of the data well, thereby obtaining more accurate feature selection results.
The basic idea for realizing the invention is as follows:
(1) Construct an objective function suitable for the unsupervised feature selection problem from the self-expression model of the original data set: each feature can be linearly expressed by the other features, and the commonly used Frobenius norm constrains the reconstruction error term, which both handles noise in the data and makes the optimization problem convenient to solve.
(2) Considering that local structure is generally more informative than global structure, the method constructs a local manifold structure preservation constraint so that the reconstructed data keep the structural relationships of the original space, which improves the robustness of the feature selection algorithm.
(3) Since the object of the invention is feature selection, an l2,1-norm regularization constraint must be imposed on the feature selection matrix (the l2,1-norm of W is the sum of the l2-norms of its rows), ensuring that the obtained matrix is row-sparse so that feature importance can be read off its rows.
(4) Optimize and solve the objective function expression. Since a closed-form solution of the objective cannot be obtained directly, the method solves the model with an alternating iteration algorithm to obtain the optimal feature selection matrix; the convergence of the solver is verified in the experimental part.
(5) Compute the row norms of the solved feature selection matrix, sort them in descending order, and select the features corresponding to the first h values as the final feature selection result.
The specific technical scheme of the invention is as follows:
the invention provides a data characteristic selection method with structure retention characteristics, which is characterized by comprising the following steps of:
Step 1, determining an original data set X, and constructing a self-expression model of the original data set X; the original data set X is the COIL20 object image data set, the MNIST handwritten character data set, or the TOX_171 biological data set;
X is an N × d matrix, where N is the number of data points and d is the data feature dimension; N and d are both positive integers;
the specific construction method comprises the following steps:
For the i-th feature of the original data set X, a self-expression model is constructed:

\min_{w_i} \|w_i\|_p \quad \text{s.t.} \quad f_i = \sum_{j=1}^{d} w_{ji} f_j, \qquad (1)

where w_{ji} are the expression coefficients, f_i denotes the i-th feature of the original data set X, f_j denotes the j-th feature of the original data set X, and \|\cdot\|_p is the p-norm;
the self-expression model of the original data set X is:
\min_{W} \|W\|_p \quad \text{s.t.} \quad X = XW, \qquad (2)

where W \in R^{d \times d} is the reconstruction expression coefficient matrix;
Considering that the original data set X in expression (2) usually contains noise, expression (2) becomes:

\min_{W,E} \|E\|_F^2 + \alpha \|W\|_p \quad \text{s.t.} \quad X = XW + E, \qquad (3)

where E represents the noise term in the original data set X; expression (3) is equivalent to:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_p, \qquad (4)

where \alpha is a weight coefficient;
Step 2, adding a local manifold structure preservation constraint to expression (4);
Selecting any two data points x_m and x_n in the original data set X, the corresponding weight can be expressed as:

s_{mn} = \begin{cases} \exp\left(-\|x_m - x_n\|_2^2 / \sigma\right), & x_m \in N_k(x_n) \ \text{or} \ x_n \in N_k(x_m) \\ 0, & \text{otherwise,} \end{cases} \qquad (5)

where N_k(x) denotes the set of k nearest neighbours of x and \sigma is the kernel width;
To maintain the local manifold structure, the reconstructed data W^T x_m and W^T x_n corresponding to data points x_m and x_n should preserve the nearest-neighbour relationship of the original data points (each row of XW represents a reconstructed data point, so W^T x_m is the transpose of the m-th reconstructed data point of XW and W^T x_n is the transpose of the n-th); combining expressions (4) and (5) gives expression (6):

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_p + \frac{\beta}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2; \qquad (6)
Step 3, constraining the reconstruction coefficient matrix W in expression (6) to obtain the objective function expression;
An l_{2,1}-norm regularization constraint is imposed on the reconstruction coefficient matrix W to ensure that the obtained W is row-sparse; the objective function expression is:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_{2,1} + \frac{\beta}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2; \qquad (7)
Step 4, optimizing and solving the objective function expression;
Considering that equation (7) needs to be differentiated, the third term in equation (7) is first simplified:

\frac{1}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2 = \operatorname{Tr}\left(W^T X^T L_S X W\right); \qquad (8)
The objective function (7) can thus be converted into the following form:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_{2,1} + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right), \qquad (9)

where L_S = D - S denotes the Laplacian matrix of S and D is the diagonal degree matrix with D_{mm} = \sum_n s_{mn}. Considering that W is row-sparse, \|w^i\|_2 (the l2-norm of the i-th row of W) may be zero, so \|w^i\|_2 is rewritten as \sqrt{\|w^i\|_2^2 + \varepsilon} (\varepsilon is a positive number approaching 0), and one obtains:

\min_{W} \|X - XW\|_F^2 + \alpha \sum_{i=1}^{d} \sqrt{\|w^i\|_2^2 + \varepsilon} + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right); \qquad (10)
Let

\mathcal{L}(W) = \|X - XW\|_F^2 + \alpha \operatorname{Tr}\left(W^T Q W\right) + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right). \qquad (11)

Differentiating with respect to W and setting the derivative to zero yields:

X^T X W - X^T X + \alpha Q W + \beta X^T L_S X W = 0,

where Q \in R^{d \times d} is a diagonal matrix in which each diagonal element Q_{ii} has the form:

Q_{ii} = \frac{1}{2\sqrt{\|w^i\|_2^2 + \varepsilon}}; \qquad (12)

fixing Q, the expression for W can be found as:

W = \left(\beta X^T L_S X + X^T X + \alpha Q\right)^{-1} X^T X. \qquad (13)
Q and W are solved iteratively using equations (12) and (13), and the value of objective (10) at the t-th iteration, denoted \mathcal{J}^{(t)}, is checked against the convergence condition

|\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| \le \delta,

where \delta is a small positive threshold. If |\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| \le \delta, convergence is considered reached and the final feature selection matrix W^* is output; if |\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| > \delta, convergence is not reached, and the iterative solution of Q and W using equations (12) and (13) is continued until the convergence condition is satisfied;
Step 5, performing feature selection according to W^*.
For each feature i, the row norm

\theta_i = \|w^{*i}\|_2

is computed, where w^{*i} denotes the i-th row of W^*; the values are then sorted in descending order, the features corresponding to the first h largest values are selected as the final feature selection result for the original data set X, and the remaining features are removed.
The invention has the beneficial effects that:
Generally speaking, the invention uses a self-expression model to explore the similarity relations among the data, which avoids the noise introduced by learning a pseudo-label matrix, and it introduces a local manifold structure preservation constraint into the self-expression model, thereby improving the robustness of the algorithm.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is the result of the convergence test of the present invention on the COIL20 object image data set;
FIG. 3 is a result of a convergence test on an MNIST handwritten character data set according to the present invention;
FIG. 4 is the result of the convergence test on TOX _171 biological data set according to the present invention;
Detailed Description
The steps performed by the present invention are described in further detail below with reference to the following figures:
referring to fig. 1, the steps implemented by the present invention are as follows:
Step 1, determining an original data set X, and constructing a self-expression model of the original data set X; the original data set X is the COIL20 object image data set, the MNIST handwritten character data set, or the TOX_171 biological data set;
X is an N × d matrix, where N is the number of data points and d is the data feature dimension; N and d are both positive integers;
the specific construction method comprises the following steps:
For the i-th feature of the original data set X, a self-expression model is constructed:

\min_{w_i} \|w_i\|_p \quad \text{s.t.} \quad f_i = \sum_{j=1}^{d} w_{ji} f_j, \qquad (1)

where w_{ji} are the expression coefficients, f_i denotes the i-th feature of the original data set X, f_j denotes the j-th feature of the original data set X, and \|\cdot\|_p is the p-norm;
the self-expression model of the original data set X is:
\min_{W} \|W\|_p \quad \text{s.t.} \quad X = XW, \qquad (2)

where W \in R^{d \times d} is the reconstruction expression coefficient matrix;
Considering that the original data set X in expression (2) usually contains noise, expression (2) can be modified to:

\min_{W,E} \|E\|_F^2 + \alpha \|W\|_p \quad \text{s.t.} \quad X = XW + E, \qquad (3)

where E represents the noise term in the original data set X; expression (3) is equivalent to:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_p, \qquad (4)

where \alpha is a weight coefficient;
Step 2, adding a local manifold structure preservation constraint to expression (4);
Selecting any two data points x_m and x_n in the original data set X, the corresponding weight can be expressed as:

s_{mn} = \begin{cases} \exp\left(-\|x_m - x_n\|_2^2 / \sigma\right), & x_m \in N_k(x_n) \ \text{or} \ x_n \in N_k(x_m) \\ 0, & \text{otherwise,} \end{cases} \qquad (5)

where N_k(x) denotes the set of k nearest neighbours of x and \sigma is the kernel width;
To maintain the local manifold structure, the reconstructed data W^T x_m and W^T x_n corresponding to data points x_m and x_n should maintain the nearest-neighbour relationship of the original data points (since each row of XW represents a reconstructed data point, W^T x_m is the transpose of the m-th reconstructed data point of XW and W^T x_n the transpose of the n-th); thus, expression (6) is obtained by combining expressions (4) and (5):

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_p + \frac{\beta}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2; \qquad (6)
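For concreteness, a minimal sketch of this construction follows; it builds the k-nearest-neighbour heat-kernel weight matrix S of expression (5) and the Laplacian L_S = D - S used in step 4. The defaults k = 5 and sigma = 1.0 are assumptions, since the patent does not fix these parameters.

```python
import numpy as np

def knn_heat_kernel_graph(X, k=5, sigma=1.0):
    """Weight matrix S of expression (5) and Laplacian L_S = D - S.

    X: (N, d) original data set with one data point per row.
    """
    N = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared distances
    S = np.zeros((N, N))
    for m in range(N):
        nn = np.argsort(dist2[m])[1:k + 1]            # k nearest neighbours
        S[m, nn] = np.exp(-dist2[m, nn] / sigma)
    S = np.maximum(S, S.T)    # keep an edge if either point is a neighbour
    L_S = np.diag(S.sum(axis=1)) - S
    return S, L_S
```

With L_S in hand, the manifold term of expression (6) reduces to \beta \operatorname{Tr}(W^T X^T L_S X W), which is exactly the simplification exploited in equation (8) below.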
Step 3, constraining the reconstruction coefficient matrix W in expression (6) to obtain the objective function expression;
An l_{2,1}-norm regularization constraint is imposed on the reconstruction coefficient matrix W, and the reconstruction coefficient matrix at this point is recorded as the feature selection matrix; this ensures that the obtained W is row-sparse. The specific form is:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_{2,1} + \frac{\beta}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2; \qquad (7)
Step 4, optimizing and solving the objective function expression;
Equation (7) needs to be differentiated, but the third term of equation (7) is complex to handle directly; to facilitate the derivation, the third term is simplified as follows:

\frac{1}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2 = \operatorname{Tr}\left(W^T X^T L_S X W\right); \qquad (8)
The objective function (7) can thus be converted into the following form:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_{2,1} + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right), \qquad (9)

where L_S = D - S denotes the Laplacian matrix of S and D is the diagonal degree matrix with D_{mm} = \sum_n s_{mn}. Considering that W is row-sparse, \|w^i\|_2 (the l2-norm of the i-th row of W) may be zero, so \|w^i\|_2 is rewritten as \sqrt{\|w^i\|_2^2 + \varepsilon} (\varepsilon is a positive number approaching 0), and one obtains:

\min_{W} \|X - XW\|_F^2 + \alpha \sum_{i=1}^{d} \sqrt{\|w^i\|_2^2 + \varepsilon} + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right); \qquad (10)
Let

\mathcal{L}(W) = \|X - XW\|_F^2 + \alpha \operatorname{Tr}\left(W^T Q W\right) + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right). \qquad (11)

Differentiating with respect to W and setting the derivative to zero yields:

X^T X W - X^T X + \alpha Q W + \beta X^T L_S X W = 0,

where Q \in R^{d \times d} is a diagonal matrix in which each diagonal element Q_{ii} has the form:

Q_{ii} = \frac{1}{2\sqrt{\|w^i\|_2^2 + \varepsilon}}; \qquad (12)

fixing Q, the expression for W can be found as:

W = \left(\beta X^T L_S X + X^T X + \alpha Q\right)^{-1} X^T X. \qquad (13)
Q and W are solved iteratively using equations (12) and (13), and the value of objective (10) at the t-th iteration, denoted \mathcal{J}^{(t)}, is checked against the convergence condition

|\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| \le \delta,

where \delta is a small positive threshold. If |\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| \le \delta, convergence is considered reached and the final feature selection matrix W^* is output; if |\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| > \delta, convergence is not reached, and the iterative solution of Q and W using equations (12) and (13) is continued until the convergence condition is satisfied;
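A minimal NumPy sketch of the alternating iteration of step 4 follows, written against the formulas above; the initial W, the tolerance delta, the smoothing constant eps and the parameter defaults are assumptions rather than values fixed by the patent.

```python
import numpy as np

def solve_feature_selection(X, L_S, alpha=1.0, beta=1.0,
                            eps=1e-8, delta=1e-4, max_iter=100):
    """Alternately update Q (eq. 12) and W (eq. 13) for objective (10).

    X: (N, d) data matrix; L_S: (N, N) graph Laplacian of S.
    Returns the feature selection matrix W*.
    """
    d = X.shape[1]
    XtX = X.T @ X
    XtLX = X.T @ L_S @ X
    W = np.eye(d)                                  # any full-rank start works
    prev_obj = np.inf
    for _ in range(max_iter):
        row_norms = np.sqrt(np.sum(W ** 2, axis=1) + eps)
        Q = np.diag(1.0 / (2.0 * row_norms))       # eq. (12)
        # eq. (13): W = (beta X'L_S X + X'X + alpha Q)^(-1) X'X
        W = np.linalg.solve(beta * XtLX + XtX + alpha * Q, XtX)
        # Objective (10), used as the convergence monitor.
        obj = (np.linalg.norm(X - X @ W, 'fro') ** 2
               + alpha * np.sum(np.sqrt(np.sum(W ** 2, axis=1) + eps))
               + beta * np.trace(W.T @ XtLX @ W))
        if abs(prev_obj - obj) <= delta:           # convergence condition
            break
        prev_obj = obj
    return W
```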
Step 5, performing feature selection according to W^*.
For each feature i, the row norm

\theta_i = \|w^{*i}\|_2

is computed, where w^{*i} denotes the i-th row of W^*; the values are then sorted in descending order, the features corresponding to the first h largest values are selected as the final feature selection result, and the remaining features are removed.
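Step 5 then reduces to ranking the row norms of W*; a short sketch using the functions above (all names are illustrative):

```python
import numpy as np

def select_features(W_star, h):
    """Indices of the h features with the largest row l2-norms of W*."""
    row_norms = np.linalg.norm(W_star, axis=1)
    return np.argsort(row_norms)[::-1][:h]

# Example: keep the h = 50 most important features of X.
# S, L_S = knn_heat_kernel_graph(X)
# W_star = solve_feature_selection(X, L_S)
# X_selected = X[:, select_features(W_star, h=50)]
```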
The effects of the present invention can be further explained by the following simulation experiments.
1. Simulation conditions
The invention is simulated with MATLAB software on an Intel(R) Core i3-2130 3.40 GHz central processing unit with 16 GB of memory, under the WINDOWS 7 operating system.
Three data sets are adopted in the experiments, namely the COIL20 object image data set, the MNIST handwritten character data set and the TOX_171 biological data set; for each data set, multiple experiments are carried out to compute the mean and variance of the results.
2. Simulation content
The method of the invention is used for clustering analysis of data according to the following steps:
First, to show the effectiveness of the algorithm, six unsupervised feature selection algorithms are selected for comparison: the Laplacian score (LS), the multi-cluster feature selection algorithm (MCFS), the unsupervised discriminative feature selection algorithm (UDFS), the nonnegative discriminative feature selection algorithm (NDFS), the robust unsupervised feature selection algorithm (RUFS) and the embedded unsupervised feature selection algorithm (EUFS). The convergence results of the method of the invention are shown in fig. 2, fig. 3 and fig. 4; as the convergence curves show, the method becomes stable once the number of iterations exceeds 10, which verifies the convergence of the method and guarantees its stability.
Second, the clustering accuracy (AC) achieved by the method is compared with the values obtained by the six comparison methods. The results are shown in Table 1: the method achieves the best performance on all data sets, which verifies its effectiveness.
TABLE 1. Clustering accuracy (AC % ± std) of different feature selection algorithms on COIL20, MNIST and TOX_171
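The clustering accuracy (AC) reported in Table 1 is conventionally computed by running k-means on the selected features and matching predicted clusters to ground-truth labels with the Hungarian algorithm. The sketch below assumes integer label arrays and the availability of SciPy and scikit-learn; the patent's own experiments were run in MATLAB.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def clustering_accuracy(y_true, y_pred):
    """Best one-to-one cluster-to-label matching (Hungarian algorithm)."""
    n_cls = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((n_cls, n_cls), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)      # maximise matched pairs
    return cost[rows, cols].sum() / len(y_true)

def evaluate_ac(X_selected, y_true, n_clusters):
    """Cluster the selected features with k-means and score against labels."""
    y_pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_selected)
    return clustering_accuracy(y_true, y_pred)
```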

Claims (1)

1. A method for selecting data features having structure-preserving characteristics, comprising the steps of:
Step 1, determining an original data set X, and constructing a self-expression model of the original data set X; wherein the original data set X is the COIL20 object image data set, the MNIST handwritten character data set, or the TOX_171 biological data set;
X is an N × d matrix, where N is the number of data points and d is the data feature dimension; N and d are both positive integers;
the specific construction method comprises the following steps:
For the i-th feature of the original data set X, a self-expression model is constructed:

\min_{w_i} \|w_i\|_p \quad \text{s.t.} \quad f_i = \sum_{j=1}^{d} w_{ji} f_j, \qquad (1)

where w_{ji} are the expression coefficients, f_i denotes the i-th feature of the original data set X, f_j denotes the j-th feature of the original data set X, and \|\cdot\|_p is the p-norm;
the self-expression model of the original data set X is:
\min_{W} \|W\|_p \quad \text{s.t.} \quad X = XW, \qquad (2)

where W \in R^{d \times d} is the reconstruction expression coefficient matrix;
Considering that the original data set X in expression (2) usually contains noise, expression (2) becomes:

\min_{W,E} \|E\|_F^2 + \alpha \|W\|_p \quad \text{s.t.} \quad X = XW + E, \qquad (3)

where E represents the noise term in the original data set X; expression (3) is equivalent to:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_p, \qquad (4)

where \alpha is a weight coefficient;
Step 2, adding a local manifold structure preservation constraint to expression (4);
Selecting any two data points x_m and x_n in the original data set X, the corresponding weight can be expressed as:

s_{mn} = \begin{cases} \exp\left(-\|x_m - x_n\|_2^2 / \sigma\right), & x_m \in N_k(x_n) \ \text{or} \ x_n \in N_k(x_m) \\ 0, & \text{otherwise,} \end{cases} \qquad (5)

where N_k(x) denotes the set of k nearest neighbours of x and \sigma is the kernel width;
Combining expressions (4) and (5) gives expression (6):

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_p + \frac{\beta}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2, \qquad (6)

so that, to maintain the local manifold structure, the reconstructed data W^T x_m and W^T x_n corresponding to data points x_m and x_n preserve the nearest-neighbour relationship of the original data points; W^T x_m denotes the transpose of the m-th reconstructed data point of XW and W^T x_n the transpose of the n-th reconstructed data point of XW;
Step 3, constraining the reconstruction coefficient matrix W in expression (6) to obtain the objective function expression;
An l_{2,1}-norm regularization constraint is imposed on the reconstruction coefficient matrix W to ensure that the obtained W is row-sparse; the objective function expression is:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_{2,1} + \frac{\beta}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2; \qquad (7)
Step 4, optimizing and solving the objective function expression;
Considering that equation (7) needs to be differentiated, the third term in equation (7) is first simplified:

\frac{1}{2} \sum_{m,n=1}^{N} s_{mn} \|W^T x_m - W^T x_n\|_2^2 = \operatorname{Tr}\left(W^T X^T L_S X W\right); \qquad (8)
The objective function (7) can thus be converted into the following form:

\min_{W} \|X - XW\|_F^2 + \alpha \|W\|_{2,1} + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right), \qquad (9)

where L_S = D - S denotes the Laplacian matrix of S and D is the diagonal degree matrix with D_{mm} = \sum_n s_{mn}. Considering that W is row-sparse, \|w^i\|_2 (the l2-norm of the i-th row of W) may be zero, so \|w^i\|_2 is rewritten as \sqrt{\|w^i\|_2^2 + \varepsilon} (\varepsilon is a positive number approaching 0), and one obtains:

\min_{W} \|X - XW\|_F^2 + \alpha \sum_{i=1}^{d} \sqrt{\|w^i\|_2^2 + \varepsilon} + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right); \qquad (10)
Let

\mathcal{L}(W) = \|X - XW\|_F^2 + \alpha \operatorname{Tr}\left(W^T Q W\right) + \beta \operatorname{Tr}\left(W^T X^T L_S X W\right). \qquad (11)

Differentiating with respect to W and setting the derivative to zero yields:

X^T X W - X^T X + \alpha Q W + \beta X^T L_S X W = 0,

where Q \in R^{d \times d} is a diagonal matrix in which each diagonal element Q_{ii} has the form:

Q_{ii} = \frac{1}{2\sqrt{\|w^i\|_2^2 + \varepsilon}}; \qquad (12)

fixing Q, the expression for W can be found as:

W = \left(\beta X^T L_S X + X^T X + \alpha Q\right)^{-1} X^T X. \qquad (13)
Q and W are solved iteratively using equations (12) and (13), and the value of objective (10) at the t-th iteration, denoted \mathcal{J}^{(t)}, is checked against the convergence condition

|\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| \le \delta,

where \delta is a small positive threshold. If |\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| \le \delta, convergence is considered reached and the final feature selection matrix W^* is output; if |\mathcal{J}^{(t+1)} - \mathcal{J}^{(t)}| > \delta, convergence is not reached, and the iterative solution of Q and W using equations (12) and (13) is continued until the convergence condition is satisfied;
Step 5, performing feature selection according to W^*;
for each feature i, the row norm

\theta_i = \|w^{*i}\|_2

is computed, where w^{*i} denotes the i-th row of W^*; the values are then sorted in descending order, the features corresponding to the first h largest values are selected as the final feature selection result for the original data set X, and the remaining features are removed.
CN201810167419.4A 2018-02-28 2018-02-28 Data feature selection method with structure retention characteristics Active CN108388918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810167419.4A CN108388918B (en) 2018-02-28 2018-02-28 Data feature selection method with structure retention characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810167419.4A CN108388918B (en) 2018-02-28 2018-02-28 Data feature selection method with structure retention characteristics

Publications (2)

Publication Number Publication Date
CN108388918A CN108388918A (en) 2018-08-10
CN108388918B (en) 2020-06-12

Family

ID=63069094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810167419.4A Active CN108388918B (en) 2018-02-28 2018-02-28 Data feature selection method with structure retention characteristics

Country Status (1)

Country Link
CN (1) CN108388918B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800780B (en) * 2018-12-05 2021-04-27 天津大学 Domain self-adaptive remote sensing image classification algorithm based on unsupervised manifold alignment
CN111783816A (en) * 2020-02-27 2020-10-16 北京沃东天骏信息技术有限公司 Feature selection method and device, multimedia and network data dimension reduction method and equipment
CN112231933B (en) * 2020-11-06 2023-07-28 中国人民解放军国防科技大学 Feature selection method for radar electromagnetic interference effect analysis

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101369316A (en) * 2008-07-09 2009-02-18 东华大学 Image characteristics extraction method based on global and local structure amalgamation
CN102663392A (en) * 2012-02-29 2012-09-12 浙江大学 Image feature extraction method based on Laplace operator
CN103034869A (en) * 2012-12-05 2013-04-10 湖州师范学院 Part maintaining projection method of adjacent field self-adaption
CN103605889A (en) * 2013-11-13 2014-02-26 浙江工业大学 Data dimension reduction method based on data global-local structure preserving projections
CN107220656A (en) * 2017-04-17 2017-09-29 西北大学 A kind of multiple labeling data classification method based on self-adaptive features dimensionality reduction
CN107316050A (en) * 2017-05-19 2017-11-03 中国科学院西安光学精密机械研究所 Subspace based on Cauchy's loss function is from expression model clustering method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7088860B2 (en) * 2001-03-28 2006-08-08 Canon Kabushiki Kaisha Dynamically reconfigurable signal processing circuit, pattern recognition apparatus, and image processing apparatus


Also Published As

Publication number Publication date
CN108388918A (en) 2018-08-10

Similar Documents

Publication Publication Date Title
Li et al. Towards faster training of global covariance pooling networks by iterative matrix square root normalization
CN107122809B (en) Neural network feature learning method based on image self-coding
Quattoni et al. An efficient projection for l1,∞ regularization
Kuo et al. Green learning: Introduction, examples and outlook
CN111461157B (en) Self-learning-based cross-modal Hash retrieval method
CN106991355B (en) Face recognition method of analytic dictionary learning model based on topology maintenance
CN111898703B (en) Multi-label video classification method, model training method, device and medium
CN110889865B (en) Video target tracking method based on local weighted sparse feature selection
CN108388918B (en) Data feature selection method with structure retention characteristics
CN107451545A (en) The face identification method of Non-negative Matrix Factorization is differentiated based on multichannel under soft label
CN112529638B (en) Service demand dynamic prediction method and system based on user classification and deep learning
CN110705636A (en) Image classification method based on multi-sample dictionary learning and local constraint coding
CN112270345A (en) Clustering algorithm based on self-supervision dictionary learning
CN113011243A (en) Facial expression analysis method based on capsule network
Lin et al. A deep clustering algorithm based on gaussian mixture model
Yang et al. Attention-based dynamic alignment and dynamic distribution adaptation for remote sensing cross-domain scene classification
CN108121964B (en) Matrix-based joint sparse local preserving projection face recognition method
Zhang et al. A novel deep LeNet-5 convolutional neural network model for image recognition
CN110378356B (en) Fine-grained image identification method based on multi-target Lagrangian regularization
Ye et al. TS2V: A transformer-based Siamese network for representation learning of univariate time-series data
CN106529601A (en) Image classification prediction method based on multi-task learning in sparse subspace
CN115393631A (en) Hyperspectral image classification method based on Bayesian layer graph convolution neural network
CN112069978B (en) Face recognition method based on mutual information and dictionary learning
CN114036947A (en) Small sample text classification method and system for semi-supervised learning
Zhang et al. Fully Kernected Neural Networks

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant