CN108573263A - A Dictionary Learning Approach to Jointly Structured Sparse Representations and Low-Dimensional Embeddings - Google Patents
- Publication number
- CN108573263A (application number CN201810444013.6A)
- Authority
- CN
- China
- Prior art keywords
- dictionary
- matrix
- low
- coefficient
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
Abstract
Description
Technical Field
The invention belongs to the technical field of digital image processing, and specifically relates to a dictionary learning method that jointly performs structured sparse representation and low-dimensional embedding.
Background Art
The core idea of sparse representation rests on an empirical fact: many natural signals can be represented, or encoded, as a linear combination of only a few atoms drawn from an overcomplete dictionary. The most critical issue in sparse representation research is the construction of dictionaries with strong representational power. Sparse representation techniques are now widely used in many applications, such as image classification, face recognition, and human action recognition.
Dictionary learning seeks an optimal dictionary from training samples so that a given signal or feature can be better represented or encoded. For classification based on sparse representation, the ideal coefficient matrix of the training samples under the dictionary is block diagonal: each sample has nonzero coefficients only on the sub-dictionary of its own class and zero coefficients on the sub-dictionaries of all other classes. Such a structured coefficient matrix has the best class-discrimination ability. Moreover, the high dimensionality of the training data and the shortage of training samples still pose great challenges to dictionary learning, so it is natural to introduce dimensionality reduction into the dictionary learning process. Unfortunately, existing methods usually treat dimensionality reduction and dictionary learning as two independent steps: the training data are first reduced in dimension, and the dictionary is then learned in the low-dimensional feature space. With this serial cascade, the pre-learned low-dimensional projection is likely to fail to preserve and enhance the latent sparse structure of the data, which is detrimental to learning a strongly discriminative dictionary.
Summary of the Invention
The purpose of the present invention is to provide a dictionary learning method that jointly performs structured sparse representation and low-dimensional embedding. It addresses a problem of prior-art dictionary learning methods: because of the high dimensionality of the training sample data and the absence of a strict block-diagonal structural constraint on the dictionary, the encoded representation coefficients discriminate between classes only weakly.
The technical solution adopted by the present invention is a dictionary learning method that jointly performs structured sparse representation and low-dimensional embedding, implemented according to the following steps:
Step 1. Read in the feature data set of the training samples, Y = [Y_1, Y_2, …, Y_C] ∈ R^{n×N}, where C is the number of classes, n is the feature dimension, Y_i ∈ R^{n×N_i} is the feature subset formed by the N_i samples of the i-th class, i = 1, 2, …, C, and N = N_1 + N_2 + … + N_C.
Step 2. Solve the optimization problem given below with the alternating-direction Lagrange multiplier method to obtain the coding dictionary D, the dimensionality-reduction projection matrix P, and the coding coefficient matrix X.
Step 3. Read in the feature data y ∈ R^n of a test sample; using the coding dictionary D and the dimensionality-reduction projection matrix P, obtain the sparse representation coefficients x̂ of the test sample by solving a sparsity-regularized coding problem of the form

x̂ = argmin_x ||P·y − D·x||_2² + λ·||x||_1, with λ > 0.
Step 4. Compute the reconstruction error e_i of the sparse representation coefficients of the test sample on each class sub-dictionary D_i: e_i = ||P·y − D_i·x̂_i||_2², where x̂_i is the sub-vector of x̂ corresponding to the i-th sub-dictionary D_i, D = [D_1, D_2, …, D_C], i = 1, 2, …, C.
Step 5. Classify the test sample y according to the minimum-reconstruction-error criterion; its class label is label(y) = argmin_i e_i.
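Steps 3 to 5 follow the classical sparse-representation-based classification rule: code the projected test sample over the learned dictionary, then assign the class whose sub-dictionary reconstructs it with the smallest error. The following NumPy sketch illustrates the idea; the projection P and sub-dictionaries here are toy stand-ins, and a plain least-squares coder per sub-dictionary replaces the ℓ1-regularized coder of Step 3 (an assumption made purely for brevity):

```python
import numpy as np

def classify_by_reconstruction(y, P, subdicts):
    """Project y with P, then pick the class whose sub-dictionary D_i
    reconstructs P @ y with the smallest residual (Steps 4-5)."""
    z = P @ y                                   # low-dimensional embedding of the test sample
    errors = []
    for D_i in subdicts:
        # least-squares coefficients on this sub-dictionary (stand-in for the l1 coder)
        x_i, *_ = np.linalg.lstsq(D_i, z, rcond=None)
        errors.append(np.linalg.norm(z - D_i @ x_i) ** 2)  # e_i = ||P y - D_i x_i||_2^2
    return int(np.argmin(errors))               # class label with minimum reconstruction error

# toy check: two one-atom sub-dictionaries, a sample aligned with the class-1 atom
P = np.eye(2)
D0 = np.array([[1.0], [0.0]])
D1 = np.array([[0.0], [1.0]])
y = np.array([0.1, 2.0])
label = classify_by_reconstruction(y, P, [D0, D1])
```

In practice x̂ would come from a sparse solver run once over the full dictionary D, with each e_i computed from the sub-vector of x̂ belonging to D_i.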
The present invention is further characterized in that:
In Step 2, the optimization problem is

min_{P,D,X} ||Y − Pᵀ·D·X||_F² − λ1·tr(P·Y·Yᵀ·Pᵀ) + λ2·||X||_1 + λ3·Σ_{i=1}^{C} ||X_ii||_*

s.t. X = diag(X_11, X_22, …, X_CC), P·Pᵀ = I,

where the parameters λ1, λ2, λ3 > 0; P ∈ R^{m×n} is the low-dimensional projection transformation matrix with m ≪ n; X is the representation coefficient matrix of the training samples Y under the dictionary D = [D_1, D_2, …, D_C], and its block X_ij is the representation coefficient of the j-th class training samples Y_j on the i-th class sub-dictionary D_i, i, j ∈ {1, 2, …, C}.

X is required to satisfy the block-diagonalization structural constraint X = diag(X_11, X_22, …, X_CC), i.e., X_ij = 0 for all i ≠ j.
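The block-diagonal constraint on X can be enforced by projecting the coefficient matrix onto the constraint set, i.e., zeroing every cross-class block X_ij with i ≠ j after each update. A minimal sketch, with hypothetical per-class atom and sample counts:

```python
import numpy as np

def block_diag_project(X, atom_counts, sample_counts):
    """Keep only the diagonal blocks X_ii (coefficients of class-i samples
    on class-i atoms); zero all cross-class blocks X_ij, i != j."""
    out = np.zeros_like(X)
    r = c = 0
    for k_i, n_i in zip(atom_counts, sample_counts):
        out[r:r + k_i, c:c + n_i] = X[r:r + k_i, c:c + n_i]
        r += k_i
        c += n_i
    return out

X = np.ones((4, 6))                      # 2 classes: 2 atoms each, 3 samples each
Xb = block_diag_project(X, [2, 2], [3, 3])
```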
Step 2 is implemented as follows:
Step 2.1. Introduce the auxiliary variable set {Z_ii, i = 1, 2, …, C} and let Z_ii = X_ii; the optimization problem becomes

min_{P,D,X,Z} ||Y − Pᵀ·D·X||_F² − λ1·tr(P·Y·Yᵀ·Pᵀ) + λ2·||X||_1 + λ3·Σ_{i=1}^{C} ||Z_ii||_*

s.t. X = diag(X_11, X_22, …, X_CC), P·Pᵀ = I, Z_ii = X_ii, i = 1, 2, …, C.
Its augmented Lagrangian is

L(P, D, X, {Z_ii}, {F_ii}) = ||Y − Pᵀ·D·X||_F² − λ1·tr(P·Y·Yᵀ·Pᵀ) + λ2·||X||_1 + λ3·Σ_i ||Z_ii||_* + Σ_i ⟨F_ii, X_ii − Z_ii⟩ + (γ/2)·Σ_i ||X_ii − Z_ii||_F²,

s.t. P·Pᵀ = I,

where the F_ii are Lagrange multipliers and γ > 0 is a penalty parameter.
Step 2.2. Alternately and iteratively update the matrices P, D, X, and Z_ii until P, D, X, and Z_ii converge.
Step 2.2 is implemented as follows:
Step 2.2.1. Fix the other variables and update the matrix X by an entrywise soft-thresholding rule of the form sgn(w)·max(|w| − τ, 0), applied to each entry w of the current coefficient estimate with a threshold τ determined by λ2 and γ, where sgn(x) is defined as:

sgn(x) = 1 if x > 0; sgn(x) = 0 if x = 0; sgn(x) = −1 if x < 0.
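The entrywise rule built from sgn above is the soft-threshold operator, i.e., the proximal operator of the ℓ1 norm, which is why it appears in the X update. A minimal sketch with an arbitrary threshold τ = 0.5 chosen for illustration:

```python
import numpy as np

def soft_threshold(W, tau):
    """Entrywise prox of tau * ||.||_1: sgn(w) * max(|w| - tau, 0)."""
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)

W = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
out = soft_threshold(W, 0.5)   # -> [-1.0, 0.0, 0.0, 0.0, 1.5]
```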
Step 2.2.2. Fix the other variables and update each matrix Z_ii by singular value thresholding:

Z_ii = U · S_{λ3/γ}(Λ) · Vᵀ,

where U·Λ·Vᵀ is the singular value decomposition of the matrix X_ii + F_ii/γ and S_τ(·) is the soft-threshold operator, defined as S_τ(x) = sgn(x)·max(|x| − τ, 0).
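Singular value thresholding is the proximal operator of the nuclear norm: soft-threshold the singular values of the input matrix and reassemble. A minimal sketch, with the threshold value and the toy matrix chosen arbitrarily for illustration:

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # S_tau applied to the spectrum
    return U @ np.diag(s_shrunk) @ Vt

M = np.diag([3.0, 1.0, 0.2])
Z = singular_value_threshold(M, 0.5)             # singular values become 2.5, 0.5, 0.0
```

Shrinking the spectrum this way lowers the rank of Z_ii, which is exactly how the method encourages coherent (low-rank) intra-class coefficients.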
Step 2.2.3. Fix the other variables and update the dictionary D column by column; after every column has been updated in this way, the updated value of the whole dictionary is obtained.
Step 2.2.4. Fix the other variables and update the matrix P as follows. First, perform the eigenvalue decomposition of the matrix (φ(P^(t−1)) − λ1·S):

[U, Λ, V] = EVD(φ(P^(t−1)) − λ1·S),

where φ(P) = (Y − Pᵀ·Δ)(Y − Pᵀ·Δ)ᵀ, Δ = D·X, S = Y·Yᵀ, and Λ is the diagonal matrix formed by the eigenvalues of (φ(P^(t−1)) − λ1·S). The update of the projection matrix P then consists of the eigenvectors U(1:m, :) corresponding to the first m eigenvalues of (φ(P^(t−1)) − λ1·S), i.e.:

P^(t) = U(1:m, :).
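The P update reduces to one symmetric eigendecomposition followed by selecting the eigenvectors of the m leading eigenvalues as the rows of P. A minimal sketch using numpy.linalg.eigh, which returns eigenvalues in ascending order, so the leading ones are taken from the end; the toy symmetric matrix here is only an illustration:

```python
import numpy as np

def update_projection(M, m):
    """Rows of P = eigenvectors of the symmetric matrix M
    associated with its m largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :m]             # columns for the m largest eigenvalues
    return top.T                              # P is m x n with orthonormal rows

M = np.diag([5.0, 1.0, 3.0])                  # stand-in for phi(P) - lambda1 * S
P = update_projection(M, 2)
```

Because the rows of P are eigenvectors of a symmetric matrix, the constraint P·Pᵀ = I holds by construction after every update.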
Step 2.2.5. Update the multipliers F_ii and the parameter γ by

F_ii^(t) = F_ii^(t−1) + γ^(t−1)·(X_ii^(t) − Z_ii^(t)), i = 1, 2, …, C,

γ^(t) = min{ρ·γ^(t−1), γ_max},

where ρ = 1.1 and γ_max = 10⁶.
After the above updates, the coding dictionary D and the dimensionality-reduction projection matrix P are obtained.
The beneficial effects of the present invention are as follows. The method eliminates inter-class correlation to obtain coding coefficients with better discriminative performance; it enhances the coherence among intra-class sparse representation coefficients through a low-rank constraint, further improving the clustering of the representation coefficients on each class sub-dictionary; and, at the same time, it strengthens the representational power of the dictionary through the learning of the projection matrix, thereby improving the robustness of the sparse representation model.
Detailed Description of the Embodiments
The present invention is described in detail below in combination with specific embodiments.
In the dictionary learning method of the present invention, which jointly performs structured sparse representation and low-dimensional embedding, dictionary construction and learning of the dimensionality-reduction projection matrix proceed alternately and in parallel. Forcing the sparse representation coefficient matrix to have a block-diagonal structure in the low-dimensional projection space enhances the inter-class incoherence of the dictionary; at the same time, the low-rank property of the representation coefficients on each class sub-dictionary preserves the intra-class correlation of the dictionary. Dictionary construction and projection learning thus reinforce each other, fully preserving the sparse structure of the data and encoding coefficients with stronger class discrimination. A specific embodiment proceeds exactly according to Steps 1 to 5 as described above.
In the dictionary learning method of the present invention, which jointly performs structured sparse representation and low-dimensional embedding, class prior information is fully exploited during dictionary construction to perform block-structured sparse coding of the training samples in the low-dimensional space, so that the encoded coefficients have stronger representational power and class discrimination, significantly improving the accuracy of classification.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810444013.6A CN108573263A (en) | 2018-05-10 | 2018-05-10 | A Dictionary Learning Approach to Jointly Structured Sparse Representations and Low-Dimensional Embeddings |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810444013.6A CN108573263A (en) | 2018-05-10 | 2018-05-10 | A Dictionary Learning Approach to Jointly Structured Sparse Representations and Low-Dimensional Embeddings |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108573263A true CN108573263A (en) | 2018-09-25 |
Family
ID=63572539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810444013.6A Pending CN108573263A (en) | 2018-05-10 | 2018-05-10 | A Dictionary Learning Approach to Jointly Structured Sparse Representations and Low-Dimensional Embeddings |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108573263A (en) |
- 2018-05-10: Application CN201810444013.6A filed in China; published as CN108573263A; status: pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829352A (en) * | 2018-11-20 | 2019-05-31 | 中国人民解放军陆军工程大学 | Communication fingerprint identification method integrating multilayer sparse learning and multi-view learning |
CN109829352B (en) * | 2018-11-20 | 2024-06-11 | 中国人民解放军陆军工程大学 | Communication fingerprint identification method integrating multilayer sparse learning and multi-view learning |
CN110033824A (en) * | 2019-04-13 | 2019-07-19 | 湖南大学 | A kind of gene expression profile classification method based on shared dictionary learning |
CN111666967A (en) * | 2020-04-21 | 2020-09-15 | 浙江工业大学 | Image classification method based on incoherent joint dictionary learning |
CN111666967B (en) * | 2020-04-21 | 2023-06-13 | 浙江工业大学 | An Image Classification Method Based on Incoherent Joint Dictionary Learning |
CN112183300A (en) * | 2020-09-23 | 2021-01-05 | 厦门大学 | A method and system for identifying AIS radiation sources based on multi-level sparse representation |
CN112183300B (en) * | 2020-09-23 | 2024-03-22 | 厦门大学 | AIS radiation source identification method and system based on multi-level sparse representation |
CN112734763A (en) * | 2021-01-29 | 2021-04-30 | 西安理工大学 | Image decomposition method based on convolution and K-SVD dictionary joint sparse coding |
CN112734763B (en) * | 2021-01-29 | 2022-09-16 | 西安理工大学 | Image decomposition method based on convolution and K-SVD dictionary joint sparse coding |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108573263A (en) | A Dictionary Learning Approach to Jointly Structured Sparse Representations and Low-Dimensional Embeddings | |
Gao et al. | Sparse representation with kernels | |
CN104408478B (en) | A Hyperspectral Image Classification Method Based on Hierarchical Sparse Discriminative Feature Learning | |
CN105868796B (en) | The design method of linear discriminant rarefaction representation classifier based on nuclear space | |
CN107194378B (en) | Face recognition method and device based on mixed dictionary learning | |
CN110889865B (en) | Video target tracking method based on local weighted sparse feature selection | |
CN106066992B (en) | Face recognition method and system based on discriminative dictionary learning based on adaptive local constraints | |
CN116612281A (en) | Text Supervised Image Semantic Segmentation System for Open Vocabulary | |
CN110705636B (en) | Image classification method based on multi-sample dictionary learning and local constraint coding | |
Sun et al. | Self-adaptive feature learning based on a priori knowledge for facial expression recognition | |
CN107832747A (en) | A kind of face identification method based on low-rank dictionary learning algorithm | |
CN117475278A (en) | Guided vehicle-centered multi-modal pre-training system and method based on structural information | |
CN105740790A (en) | Multicore dictionary learning-based color face recognition method | |
CN106529586A (en) | Image classification method based on supplemented text characteristic | |
CN114511901A (en) | Age classification assisted cross-age face recognition algorithm | |
CN104318214A (en) | Cross view angle face recognition method based on structuralized dictionary domain transfer | |
CN108460400A (en) | A kind of hyperspectral image classification method of combination various features information | |
Chen et al. | Semi-supervised dictionary learning with label propagation for image classification | |
CN117952151A (en) | A semantic vector model pre-training method based on multi-mask approach | |
CN114943862B (en) | Two-stage image classification method based on structural analysis dictionary learning | |
CN107358249A (en) | The hyperspectral image classification method of dictionary learning is differentiated based on tag compliance and Fisher | |
CN107239732A (en) | A kind of tired expression recognition method based on Gabor characteristic and rarefaction representation | |
CN104166860B (en) | The face identification method towards single test sample based on constraint | |
Yao et al. | Principal component dictionary-based patch grouping for image denoising | |
CN110781822B (en) | SAR image target recognition method based on self-adaptive multi-azimuth dictionary pair learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180925 |