CN104200077A - Embedded type attribute selection method based on subspace learning and application of embedded type attribute selection method based on subspace learning - Google Patents

Info

Publication number
CN104200077A
CN104200077A (application CN201410416253.7A)
Authority
CN
China
Prior art keywords
attribute
selection method
attribute selection
lambda
eta
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410416253.7A
Other languages
Chinese (zh)
Inventor
朱永华
宗鸣
程德波
邓振云
孙可
朱晓峰
张师超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University filed Critical Guangxi Normal University
Priority to CN201410416253.7A
Publication of CN104200077A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an embedded attribute selection method based on subspace learning, and to its application. Subspace learning is added to an embedded attribute selection framework, and the strong learning ability of subspace techniques improves the attribute-reduction capability of attribute selection. The method comprises the steps of: (1) given the condition attributes of the training set and the corresponding class labels, building the objective function of the embedded attribute selection method with the LDA function and the LPP function; (2) optimizing the objective function to obtain the optimized coefficient matrix; (3) excluding the attributes whose importance equals zero, according to the characteristics of the obtained coefficient matrix; (4) feeding the condition attributes and class labels obtained in step (3) into a support vector machine for classification or regression analysis, and obtaining the selection results. The method can address the practical problems of high-dimensional big data, and the dimension-reduced data can be applied in various fields such as classification, regression, or missing-data imputation.

Description

Embedded attribute selection method based on subspace learning and application thereof
Technical field
The present invention relates to big data, specifically to data reduction (attribute reduction) of high-dimensional data, and more specifically to a method that uses attribute selection and subspace learning simultaneously to perform attribute reduction, namely an embedded attribute selection method based on subspace learning and its application.
Background technology
Practical applications in the big-data era frequently involve high-dimensional data; for example, in text classification, computer vision, image retrieval, and genetic analysis, the dimensionality of the application data can range from several hundred to several thousand or even tens of thousands of dimensions [1]. Although modern computers can operate directly on high-dimensional data, many problems are commonly encountered, such as long processing times, the curse of dimensionality, and the presence of noise or redundant attributes [2-6]. Moreover, existing research has shown that the "intrinsic" dimensionality of high-dimensional data is usually very low [7,9]. Therefore, reducing the dimensionality of high-dimensional data (dimensionality reduction) in order to find its "intrinsic" dimensionality is a current focus of data mining and machine learning research.
Existing attribute reduction methods are usually divided into two broad classes: attribute selection (feature selection) and subspace learning [8,10]. Attribute selection methods first assign an importance score to each attribute according to some criterion, and then achieve dimensionality reduction by deleting the attributes of low importance. Attribute selection methods fall into three classes: filter, wrapper, and embedded methods [8]. Filter methods first analyze the general characteristics of the data and then carefully evaluate specific feature values to decide whether attributes need to be filtered out. In practice the filter method is a non-learning method; it is rather robust, but it often filters out some important attributes [10]. Wrapper methods first define a good learning algorithm (for example a support vector machine) and then search for a subset of attributes accordingly; in principle a wrapper can find all the most useful attributes, so its effect is usually better than that of filter methods. However, wrapper methods require more computation and are prone to overfitting [8]. Embedded methods perform attribute selection while building the model, obtaining effective attributes by optimizing the objective function of the learning model, so the effect of their attribute selection is very good; they also require less computation than wrapper methods, and their probability of overfitting is smaller [10]. Attribute selection methods have been applied to fields such as gene research and medical image analysis [2,3,4,5].
Subspace learning achieves attribute reduction by setting some criterion that projects the original high-dimensional data into a new low-dimensional space. Subspace learning methods are divided into two kinds: projection methods and manifold learning methods [8]. Projection methods find a low-dimensional dataset that contains most of the original data information by maximizing a preset objective function, and can find the transformation matrix between the original high-dimensional data and the new low-dimensional data; existing projection methods include principal component analysis (PCA) and independent component analysis (ICA). Manifold learning methods first assume that the original high-dimensional data lie on a low-dimensional manifold, and then find the target attributes by satisfying suitable objective functions; common manifold learning methods include multidimensional scaling (MDS), ISOMAP, locality preserving projections (LPP), and Laplacian eigenmaps. Subspace learning methods have been widely used in computer vision fields such as face recognition, image classification, and multimedia retrieval [1,6,7,9].
Attribute selection and subspace learning each have distinctive characteristics. Attribute selection has strong interpretability and a wide range of practical applications, but its attribute-reduction effect is not as good as that of subspace learning. Subspace learning lacks interpretability, so its range of practical applications is limited, but its learning ability after attribute reduction is clearly stronger than that of attribute selection.
List of references:
[1] X. Zhu, L. Zhang and Z. Huang. A Sparse Embedding and Least Variance Encoding Approach to Hashing. To appear in IEEE Transactions on Image Processing, 2014.
[2] X. Zhu, H. I. Suk and D. Shen. A Novel Matrix-Similarity Based Loss Function for Joint Regression and Classification in AD Diagnosis. NeuroImage, 2014.
[3] X. Zhu, H. I. Suk and D. Shen. A Novel Multi-Relation Regularization Method for Regression and Classification in AD Diagnosis. In Proceedings of MICCAI, 2014.
[4] X. Zhu, H. I. Suk and D. Shen. Multi-Modality Canonical Feature Selection for Alzheimer's Disease Diagnosis. In Proceedings of MICCAI, 2014.
[5] X. Zhu, H. I. Suk and D. Shen. Matrix-Similarity Based Loss Function and Feature Selection for Alzheimer's Disease Diagnosis. In Proceedings of CVPR, 2014.
[6] X. Zhu, Z. Huang, H. Cheng, J. Cui and H. T. Shen. Sparse Hashing for Fast Multimedia Search. ACM Transactions on Information Systems (TOIS), 31(2), 2013.
[7] X. Zhu, Z. Huang, J. Cui and H. T. Shen. Video-to-Shot Tag Propagation by Graph Sparse Group Lasso. IEEE Transactions on Multimedia (TMM), 15(3):633-646, 2013.
[8] X. Zhu, Z. Huang, Y. Yang, H. T. Shen, C. Xu and J. Luo. Self-taught dimensionality reduction on the high-dimensional small-sized data. Pattern Recognition, 46(1):215-229, 2013.
[9] X. Zhu, Z. Huang, H. T. Shen and X. Zhao. Linear Cross-Modal Hashing for Effective Multimedia Search. In Proceedings of ACM MM, 143-152, 2013.
[10] X. Zhu, Z. Huang, H. T. Shen, J. Cheng and C. Xu. Dimensionality reduction by mixed kernel canonical correlation analysis. Pattern Recognition, 45(8):3003-3016, 2012.
Summary of the invention
The present invention observes that: (1) among the three kinds of attribute selection methods, the embedded method is the simplest and most effective; (2) manifold learning is the more effective branch of subspace learning, and it can consider the local structure and the global structure of the data simultaneously. Therefore, the present invention embeds subspace learning, with its powerful learning ability, into the embedded attribute selection framework to perform attribute selection, and applies it to practical tasks such as medical image classification.
The object of the invention is, for realistic applications, to perform attribute reduction efficiently and to effectively improve the efficiency of data mining and machine learning, combining the advantages of the two attribute-reduction approaches, attribute selection and subspace learning, while avoiding their shortcomings, by providing an embedded attribute selection method based on subspace learning.
The embedded attribute selection method based on subspace learning of the present invention comprises the following steps:
1) Model building: given the condition attributes of the training set and the corresponding class labels, build the objective function of the embedded attribute selection method with the LDA function and the LPP function;
2) Optimization: optimize the objective function of step 1) to obtain the optimized coefficient matrix;
3) According to the characteristics of the obtained coefficient matrix, exclude the attributes whose importance is 0;
4) Analysis: feed the condition attributes and class labels obtained after step 3) into a support vector machine for classification or regression analysis, and obtain the selection results.
Further, the objective function built in step 1) is:
$$\min_{W}\ \|Y-W^{T}X\|_{F}^{2}+\lambda_{1}\,\mathrm{tr}\!\left(W^{T}XLX^{T}W\right)+\lambda_{2}\|W\|_{2,1}\qquad(1)$$
where $X\in\mathbb{R}^{d\times n}$ is the training set, $Y\in\mathbb{R}^{c\times n}$ is the class-label matrix, and $d$, $c$ and $n$ are the sample dimensionality, the number of class labels, and the sample size, respectively; $\lambda_1$ and $\lambda_2$ are tuning constants: $\lambda_1$ preserves the manifold structure of the samples (adjusting $\lambda_1$ keeps the order of magnitude of the LPP term consistent with that of the norm term), while $\lambda_2$ controls the row sparsity of the projection matrix $W$ (the larger $\lambda_2$, the more rows of $W$ are entirely 0 and the fewer attributes remain selected); and
$$L=D-S,\qquad S=[S_{i,j}]\in\mathbb{R}^{n\times n},\qquad D=\mathrm{diag}\Big(D_{i,i}=\sum\nolimits_{j}S_{i,j}\Big)\in\mathbb{R}^{n\times n},$$
where each element of $S$ is the heat-kernel similarity $S_{i,j}=\exp\!\big(-\|x_i-x_j\|^{2}/t\big)$, with $t$ a constant greater than zero;
$$Y_{k,j}=\begin{cases}\sqrt{n/n_k}-\sqrt{n_k/n}, & \text{if } l(x_j)=k\\ -\sqrt{n_k/n}, & \text{otherwise}\end{cases}\qquad(2)$$
where $Y_{k,j}$ is the element in row $k$, column $j$ of $Y$, $l(x_j)$ denotes the class label of sample $x_j$, and $n_k$ is the number of samples in class $k$.
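For concreteness, the following minimal NumPy sketch builds the Laplacian L and the label matrix Y exactly as defined above (the function names, the 0-based label encoding, and the dense similarity matrix are illustrative assumptions, not part of the patent text):

```python
import numpy as np

def build_graph_laplacian(X, t=1.0):
    """Graph Laplacian L = D - S for the LPP term.

    X is the d x n data matrix (columns are samples); S uses the
    heat-kernel similarity S_ij = exp(-||x_i - x_j||^2 / t), t > 0.
    """
    sq_dists = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)  # n x n
    S = np.exp(-sq_dists / t)
    D = np.diag(S.sum(axis=1))  # diagonal degree matrix
    return D - S

def build_label_matrix(labels, c):
    """LDA-style class-label matrix Y (c x n) per Eq. (2).

    labels: integer array of length n with values in {0, ..., c-1}.
    """
    labels = np.asarray(labels)
    n = labels.size
    Y = np.empty((c, n))
    for k in range(c):
        n_k = np.sum(labels == k)
        Y[k, :] = -np.sqrt(n_k / n)                       # "otherwise" branch
        Y[k, labels == k] = np.sqrt(n / n_k) - np.sqrt(n_k / n)
    return Y
```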
The optimization in step 2) of the present invention comprises:
First, formula (1) is split as follows so that an approximate accelerated (proximal) gradient method can be applied to it:
$$f(W)=\frac{1}{2}\|Y-W^{T}X\|_{F}^{2}+\lambda_{1}\,\mathrm{tr}\big(W^{T}XLX^{T}W\big)\qquad(3)$$
$$\theta(W)=f(W)+\lambda_{2}\|W\|_{2,1}\qquad(4)$$
Here, $f(W)$ is convex and differentiable, while $\lambda_{2}\|W\|_{2,1}$ is convex but non-smooth.
Second, the following optimality criterion is used to update $W$ iteratively:
$$W(t+1)=\arg\min_{W}\,G_{\eta(t)}\big(W,W(t)\big)\qquad(5)$$
$$G_{\eta(t)}\big(W,W(t)\big)=f\big(W(t)\big)+\big\langle\nabla f\big(W(t)\big),\,W-W(t)\big\rangle+\frac{\eta(t)}{2}\|W-W(t)\|_{F}^{2}+\lambda_{2}\|W\|_{2,1}\qquad(6)$$
$$\nabla f\big(W(t)\big)=\big(XX^{T}+\lambda_{1}XLX^{T}\big)W(t)-XY^{T}\qquad(7)$$
where $\eta(t)$ is a tuning (step-size) parameter and $W(t)$ is the value of $W$ after the $t$-th iteration.
Then, ignoring the terms of (6) that do not depend on $W$, formula (5) can be rewritten as:
$$W(t+1)=\pi_{\eta(t)}\big(W(t)\big)=\arg\min_{W}\ \frac{1}{2}\|W-U(t)\|_{F}^{2}+\frac{\lambda_{2}}{\eta(t)}\|W\|_{2,1}\qquad(8)$$
where $\pi_{\eta(t)}(W(t))$ is the Euclidean projection of $W(t)$ onto the convex set determined by $\eta(t)$.
Finally, each row $w_{i}(t+1)$ of $W(t+1)$ is updated separately:
$$w_{i}(t+1)=\arg\min_{w_{i}}\ \frac{1}{2}\|w_{i}-u_{i}(t)\|_{2}^{2}+\frac{\lambda_{2}}{\eta(t)}\|w_{i}\|_{2}\qquad(9)$$
where $u_{i}(t)=w_{i}(t)-\frac{1}{\eta(t)}\nabla f\big(w_{i}(t)\big)$, and $u_{i}(t)$ and $w_{i}(t)$ are the $i$-th rows of $U(t)$ and $W(t)$, respectively. When $\|u_{i}(t)\|_{2}>\frac{\lambda_{2}}{\eta(t)}$, the solution is $w_{i}^{*}=\Big(1-\frac{\lambda_{2}}{\eta(t)\|u_{i}(t)\|_{2}}\Big)u_{i}(t)$; in all other cases it is 0.
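A minimal sketch of this update, assuming the definitions above (Eqs. (7)-(9)); the closed-form row-wise shrinkage is the standard proximal operator of the $\ell_{2,1}$-norm:

```python
def grad_f(W, X, Y, L, lam1):
    """Gradient of the smooth part f(W), per Eq. (7)."""
    return (X @ X.T + lam1 * (X @ L @ X.T)) @ W - X @ Y.T

def prox_l21(U, tau):
    """Row-wise soft-thresholding: the closed-form solution of Eq. (9).

    Rows with ||u_i||_2 <= tau are set exactly to zero, which is what
    makes W row-sparse and enables attribute selection.
    """
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * U
```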
To accelerate the approximate gradient method of formula (5), the present invention introduces in step 2) an auxiliary variable $V(t+1)$:
$$V(t+1)=W(t)+\frac{\alpha(t)-1}{\alpha(t+1)}\big(W(t+1)-W(t)\big)\qquad(10)$$
where the factor $\alpha(t+1)$ is conventionally taken as $\alpha(t+1)=\frac{1+\sqrt{1+4\alpha(t)^{2}}}{2}$.
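Putting the pieces together, a FISTA-style loop under the above definitions might look as follows (the fixed step size eta, chosen as a spectral-norm bound on the gradient's Lipschitz constant, and the iteration count are our own assumptions, not specified by the patent):

```python
def fit(X, Y, L, lam1, lam2, eta=None, n_iter=100):
    """Accelerated proximal gradient loop for objective (1).

    X: d x n data, Y: c x n labels, L: n x n Laplacian.
    Returns the row-sparse coefficient matrix W (d x c).
    """
    d, c = X.shape[0], Y.shape[0]
    if eta is None:
        # Step-size heuristic: spectral norm of the Hessian of f (assumption).
        eta = np.linalg.norm(X @ X.T + lam1 * (X @ L @ X.T), 2)
    W = np.zeros((d, c))
    V, alpha = W.copy(), 1.0
    for _ in range(n_iter):
        W_new = prox_l21(V - grad_f(V, X, Y, L, lam1) / eta, lam2 / eta)
        alpha_new = (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0
        V = W_new + ((alpha - 1.0) / alpha_new) * (W_new - W)  # Eq. (10)
        W, alpha = W_new, alpha_new
    return W
```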
Given the training-set condition-attribute matrix X and the corresponding class-label matrix (decision-attribute matrix) Y, from a machine-learning perspective the present invention seeks a projection matrix W such that the prediction matrix $W^{T}X$, obtained by transforming the original high-dimensional attribute matrix X through W, is as close as possible to the true value Y: the closer $W^{T}X$ is to Y, the better the projection matrix W. Because attribute selection must also be performed, the present invention uses the $\lambda_2\|W\|_{2,1}$ term as the regularization factor of this objective function, so that the resulting objective function defines an embedded attribute selection method.
To add a subspace learning component to the above objective function, the present invention is based on the following observations: (1) linear discriminant analysis (LDA) considers the global structure of the data; (2) locality preserving projections (LPP) preserves the local structure of the data; (3) considering the global and local properties of the data simultaneously yields better learning ability than considering only one of them. Therefore, the present invention preserves the global structure of the data (i.e., LDA) by defining the new class-label matrix Y, and preserves the local structure of the data by introducing an LPP regularization factor, obtaining objective function (1).
Because the redefinition of the class labels in formula (2) gives formula (1) the function of LDA, it preserves the global structure of the data; in addition, the first regularization factor $\lambda_1$ is the LPP term, so formula (1) also has the function of LPP. Formula (1) therefore incorporates both kinds of subspace learning. Furthermore, the second regularization factor $\lambda_2$ enables formula (1) to perform attribute selection.
Formula (1) is convex but non-smooth, so the present invention solves this problem with an approximate accelerated (proximal) gradient method.
By optimizing formula (1), the present invention obtains the optimal coefficient matrix W relating X and Y. Each row of W corresponds to one attribute of X; owing to the $\lambda_2\|W\|_{2,1}$ term in formula (1), W is row-sparse, i.e., some rows are entirely 0. Such a row indicates that the corresponding attribute is unimportant and can be deleted during selection.
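Accordingly, attribute selection reduces to discarding the zero rows of W; a minimal sketch (the threshold eps is an assumption to absorb numerical noise):

```python
def select_attributes(W, X, eps=1e-8):
    """Keep only the attributes whose rows of W are nonzero.

    The squared row norm of W is used as the importance of the
    corresponding attribute (row) of X.
    """
    importance = np.sum(W ** 2, axis=1)
    keep = np.where(importance > eps)[0]
    return X[keep, :], keep
```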
The present invention thus builds an embedded attribute selection model that incorporates two kinds of subspace learning. The subspace learning part of the model improves the learning ability of attribute selection, while adopting a row-sparse embedded attribute selection model makes the result of attribute reduction interpretable.
On the other hand, the present invention also provides an application of the embedded attribute selection method based on subspace learning described above.
The concrete method of this application is: using ten-fold cross-validation, repeat the steps of the embedded attribute selection method based on subspace learning ten times, weight and average the attributes chosen each time, and then select the ten attributes chosen most often.
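Reusing the sketches above, this application could be organized as follows (scikit-learn's StratifiedKFold and SVC stand in for the patent's ten-fold cross-validation and support vector machine; counting how often an attribute is selected is a simplified stand-in for the patent's weighted averaging):

```python
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def run_application(X, labels, lam1, lam2, n_top=10):
    """Ten-fold CV: select attributes per fold, keep the 10 chosen most often."""
    labels = np.asarray(labels)
    d = X.shape[0]
    counts = np.zeros(d)
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for train_idx, _ in skf.split(X.T, labels):
        X_tr, y_tr = X[:, train_idx], labels[train_idx]
        Y_tr = build_label_matrix(y_tr, len(np.unique(labels)))
        W = fit(X_tr, Y_tr, build_graph_laplacian(X_tr), lam1, lam2)
        _, kept = select_attributes(W, X_tr)
        counts[kept] += 1
    top = np.argsort(-counts)[:n_top]          # ten most frequently selected
    clf = SVC().fit(X[top, :].T, labels)       # classify on selected attributes
    return top, clf
```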
Embodiment
A sub-dataset was downloaded free of charge from the ADNI website. This dataset contains 202 samples: 51 Alzheimer's disease patients (AD), 49 normal controls (NC), 43 mild cognitive impairment cases that converted to Alzheimer's disease (MCI-C), and 56 mild cognitive impairment patients that did not convert (MCI-NC). Each sample has a class label together with MRI (magnetic resonance imaging) and PET (positron emission tomography) image data. The MRI and PET feature vectors are concatenated into a high-dimensional dataset of 202 × 186, where MRI and PET each contribute 93 dimensions representing 93 regions of interest of the brain image. The goal of the present invention is to find which of these 93 regions are helpful for the diagnosis of Alzheimer's disease and which are not; the embedded attribute selection method based on subspace learning is designed for this purpose. After attribute selection on this dataset, the present invention uses a support vector machine to perform a three-class classification, i.e., AD vs. MCI (comprising MCI-C and MCI-NC) vs. NC.
First, the features of the MRI and PET images are extracted, and each of the two image types is represented by 93-dimensional features.
Second, a randomly selected subset of the MRI+PET data of the 202 samples serves as X in formula (1), and its corresponding class labels serve as Y in formula (1). Formula (1) is optimized by the optimization method of the present invention to obtain the optimal coefficient matrix W. The resulting W has, for example, the following form:
$$W=\begin{pmatrix}0.1 & 0.2 & 0.9\\ 0 & 0 & 0\\ 0 & 0.7 & 0.6\\ 0.3 & 0.9 & 0.01\\ 0 & 0 & 0\end{pmatrix}$$
From this W, the sum of squares of each row is the importance of the corresponding attribute in X; for example, the second and fifth rows are entirely 0, indicating that the importance of the corresponding second and fifth attributes of X is 0. Therefore, during attribute selection these two attributes can be deleted and the remaining three retained, avoiding the influence of noise on learning. The attribute-selected dataset X and the corresponding class labels Y are then fed into a support vector machine for classification or regression testing, and the test results are obtained.
Finally, ten-fold cross-validation is used to repeat the above three steps ten times, the attributes chosen each time are weighted and averaged, and the ten attributes chosen most often are used for analysis against the medical diagnosis results.
Comparing the regions obtained by the present invention with the regions found in clinical diagnosis, the present invention not only found the regions identified by clinical medicine, but also found regions that clinical medicine has not identified but that other scientific literature has reported.
Classifiers built with the regions found by the present invention yield classification results more effective than all current methods, which shows that the present invention is effective and practically applicable.

Claims (6)

1. An embedded attribute selection method based on subspace learning, comprising the following steps:
1) Model building: given the condition attributes of the training set and the corresponding class labels, build the objective function of the embedded attribute selection method with the LDA function and the LPP function;
2) Optimization: optimize the objective function of step 1) to obtain the optimized coefficient matrix;
3) According to the characteristics of the obtained coefficient matrix, exclude the attributes whose importance is 0;
4) Analysis: feed the condition attributes and class labels obtained after step 3) into a support vector machine for classification or regression analysis, and obtain the selection results.
2. The method according to claim 1, characterized in that the objective function built in step 1) is:
$$\min_{W}\ \|Y-W^{T}X\|_{F}^{2}+\lambda_{1}\,\mathrm{tr}\!\left(W^{T}XLX^{T}W\right)+\lambda_{2}\|W\|_{2,1}\qquad(1)$$
where $X\in\mathbb{R}^{d\times n}$ is the training set, $Y\in\mathbb{R}^{c\times n}$ is the class-label matrix, and $d$, $c$ and $n$ are the sample dimensionality, the number of class labels, and the sample size, respectively; $\lambda_1$ and $\lambda_2$ are tuning constants: $\lambda_1$ preserves the manifold structure of the samples (adjusting $\lambda_1$ keeps the order of magnitude of the LPP term consistent with that of the norm term), while $\lambda_2$ controls the row sparsity of the projection matrix $W$ (the larger $\lambda_2$, the more rows of $W$ are entirely 0 and the fewer attributes remain selected); and
$$L=D-S,\qquad S=[S_{i,j}]\in\mathbb{R}^{n\times n},\qquad D=\mathrm{diag}\Big(D_{i,i}=\sum\nolimits_{j}S_{i,j}\Big)\in\mathbb{R}^{n\times n},$$
where each element of $S$ is the heat-kernel similarity $S_{i,j}=\exp\!\big(-\|x_i-x_j\|^{2}/t\big)$, with $t$ a constant greater than zero;
$$Y_{k,j}=\begin{cases}\sqrt{n/n_k}-\sqrt{n_k/n}, & \text{if } l(x_j)=k\\ -\sqrt{n_k/n}, & \text{otherwise}\end{cases}\qquad(2)$$
where $Y_{k,j}$ is the element in row $k$, column $j$ of $Y$, $l(x_j)$ denotes the class label of sample $x_j$, and $n_k$ is the number of samples in class $k$.
3. The method according to claim 1, characterized in that the optimization of step 2) comprises:
First, formula (1) is split as follows so that an approximate accelerated (proximal) gradient method can be applied to it:
$$f(W)=\frac{1}{2}\|Y-W^{T}X\|_{F}^{2}+\lambda_{1}\,\mathrm{tr}\big(W^{T}XLX^{T}W\big)\qquad(3)$$
$$\theta(W)=f(W)+\lambda_{2}\|W\|_{2,1}\qquad(4)$$
Here, $f(W)$ is convex and differentiable, while $\lambda_{2}\|W\|_{2,1}$ is convex but non-smooth.
Secondly, utilize following Optimality Criteria to come iteration to upgrade W:
W ( t + 1 ) = arg min W G &eta; ( t ) ( W , W ( t ) ) - - - ( 5 )
G &eta; ( t ) ( W , W ( t ) ) = f ( W ( t ) + < &dtri; f ( W ( t ) , W - W ( t ) > + &eta; ( t ) 2 | | W - W ( t ) | | F 2 + &lambda; 2 | | W | | 2 1 - - - ( 6 )
&dtri; f ( W ( t ) ) = ( XX T + &lambda; 1 XL X T ) W ( t ) - X Y T - - - ( 7 )
η (t) and W (t) are respectively the W values after tuning parameter and t iteration;
Then, ignoring the terms of (6) that do not depend on $W$, formula (5) can be rewritten as:
$$W(t+1)=\pi_{\eta(t)}\big(W(t)\big)=\arg\min_{W}\ \frac{1}{2}\|W-U(t)\|_{F}^{2}+\frac{\lambda_{2}}{\eta(t)}\|W\|_{2,1}\qquad(8)$$
where $\pi_{\eta(t)}(W(t))$ is the Euclidean projection of $W(t)$ onto the convex set determined by $\eta(t)$;
Finally, each row $w_{i}(t+1)$ of $W(t+1)$ is updated separately:
$$w_{i}(t+1)=\arg\min_{w_{i}}\ \frac{1}{2}\|w_{i}-u_{i}(t)\|_{2}^{2}+\frac{\lambda_{2}}{\eta(t)}\|w_{i}\|_{2}\qquad(9)$$
where $u_{i}(t)=w_{i}(t)-\frac{1}{\eta(t)}\nabla f\big(w_{i}(t)\big)$, and $u_{i}(t)$ and $w_{i}(t)$ are the $i$-th rows of $U(t)$ and $W(t)$, respectively. When $\|u_{i}(t)\|_{2}>\frac{\lambda_{2}}{\eta(t)}$, the solution is $w_{i}^{*}=\Big(1-\frac{\lambda_{2}}{\eta(t)\|u_{i}(t)\|_{2}}\Big)u_{i}(t)$; in all other cases it is 0.
4. The method according to claim 3, characterized in that in step 2) an auxiliary variable $V(t+1)$ is introduced:
$$V(t+1)=W(t)+\frac{\alpha(t)-1}{\alpha(t+1)}\big(W(t+1)-W(t)\big)\qquad(10)$$
where the factor $\alpha(t+1)$ is conventionally taken as $\alpha(t+1)=\frac{1+\sqrt{1+4\alpha(t)^{2}}}{2}$.
5. Application of the embedded attribute selection method based on subspace learning according to any one of claims 1 to 4.
6. The application according to claim 5, characterized in that: using ten-fold cross-validation, the steps of the embedded attribute selection method based on subspace learning are repeated ten times, the attributes chosen each time are weighted and averaged, and then the ten attributes chosen most often are selected.
CN201410416253.7A 2014-08-22 2014-08-22 Embedded type attribute selection method based on subspace learning and application of embedded type attribute selection method based on subspace learning Pending CN104200077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410416253.7A CN104200077A (en) 2014-08-22 2014-08-22 Embedded type attribute selection method based on subspace learning and application of embedded type attribute selection method based on subspace learning

Publications (1)

Publication Number Publication Date
CN104200077A 2014-12-10

Family

ID=52085370

Country Status (1)

Country Link
CN (1) CN104200077A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108476084A (en) * 2016-12-02 2018-08-31 华为技术有限公司 The method and apparatus on adjustment state space boundary in Q study
CN108476084B (en) * 2016-12-02 2020-05-08 华为技术有限公司 Method and device for adjusting state space boundary in Q learning
CN108647707A (en) * 2018-04-25 2018-10-12 北京旋极信息技术股份有限公司 Probabilistic neural network creation method, method for diagnosing faults and device, storage medium
CN108647707B (en) * 2018-04-25 2022-09-09 北京旋极信息技术股份有限公司 Probabilistic neural network creation method, failure diagnosis method and apparatus, and storage medium
CN112183617A (en) * 2020-09-25 2021-01-05 电子科技大学 RCS sequence feature extraction method for sample and class label maximum correlation subspace
CN112183617B (en) * 2020-09-25 2022-03-29 电子科技大学 RCS sequence feature extraction method for sample and class label maximum correlation subspace
CN113935376A (en) * 2021-10-13 2022-01-14 中国科学技术大学 Brain function subregion partitioning method based on joint constraint canonical correlation analysis
CN113935376B (en) * 2021-10-13 2023-03-10 中国科学技术大学 Brain function subregion partitioning method based on joint constraint canonical correlation analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141210