CN111666967B - Image classification method based on incoherence combined dictionary learning - Google Patents

Image classification method based on incoherence combined dictionary learning

Publication number: CN111666967B (application CN202010316589.1A, China)
Prior art keywords: dictionary, class, shared, training, updating
Legal status: Active (granted)
Other versions: CN111666967A (published in Chinese)
Inventors: 李胜, 马悦, 何熊熊
Assignee (original and current): Zhejiang University of Technology (ZJUT)
Filed 2020-04-21 by Zhejiang University of Technology ZJUT; CN111666967A published 2020-09-15; CN111666967B granted 2023-06-13


Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F16/55 Clustering; Classification (information retrieval of still image data)
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • Y02T10/40 Engine management systems (climate change mitigation, road transport)


Abstract

An image classification method based on incoherent joint dictionary learning trains a class dictionary for each class of images and a shared dictionary for all images. The low-rank property of the shared dictionary is guaranteed to prevent it from absorbing class-specific features, and a coherence constraint term is added between the low-rank shared dictionary and each class dictionary to prevent shared features from appearing in the class dictionaries. The method increases the discriminability of the trained dictionary and improves its sparse representation capability, thereby improving the accuracy of image classification.

Description

Image classification method based on incoherence combined dictionary learning
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image classification method based on incoherence combined dictionary learning.
Background
In recent years, sparse representation, which expresses a signal as a linear combination of a few atoms from a redundant dictionary, has achieved great success in image processing tasks such as image classification, image denoising, and compressed sensing. In the sparse representation process, the trained dictionary largely determines the quality of the sparse representation.
Currently, researchers have proposed various dictionary training methods to improve sparse representation capability. The simplest approach uses all training samples directly as the dictionary, as in Sparse Representation Classification (SRC), but when the training set is large the algorithm's complexity is high and the dictionary is highly redundant. K-SVD is a classical dictionary training algorithm, but it performs poorly in image classification because the class labels of the training samples are not used during dictionary training. As improvements, the class-oriented dictionary training algorithms D-KSVD, DLSI and FDDL were proposed: D-KSVD introduces a linear classifier so that coding coefficients of same-class samples become more similar while the difference between coefficients of different-class samples increases; DLSI introduces a coherence constraint among class dictionaries to improve their discriminability; FDDL adds a Fisher discriminant term to increase the discriminability of the coding coefficients. Furthermore, experimental studies have found that although different classes of images have their own unique features, they also share some features, so researchers proposed joint class-dictionary and shared-dictionary training methods such as LRSDL and COPAR. LRSDL prevents the shared dictionary from absorbing class-dictionary features by enforcing the low-rank property of the shared dictionary, improving the discriminability of the dictionary. However, during joint dictionary training, shared features may still appear in the class dictionaries, which reduces the accuracy of image classification.
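As a minimal illustration of the sparse representation idea surveyed above (not code from the patent; the function name, the greedy algorithm choice, and the orthonormal test dictionary are all assumptions), the sketch below represents a signal with a few dictionary atoms via orthogonal matching pursuit:

```python
import numpy as np

def omp(y, D, k):
    """Orthogonal matching pursuit: greedily select k atoms of D and
    least-squares fit y on them -- a toy sparse-coding routine."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

With an orthonormal dictionary the routine recovers the true support exactly; with a learned redundant dictionary it gives an approximate sparse code.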
Disclosure of Invention
To overcome these shortcomings of the prior art, the invention provides an image classification method based on an incoherent joint dictionary, which prevents shared features from appearing in the class dictionaries, thereby improving the discriminability of the dictionary, optimizing the sparse representation of images, and further improving the accuracy of image classification.
The technical scheme of the invention is as follows:
An image classification method based on an incoherent joint dictionary trains one class dictionary for each class of images and one shared dictionary for all images; the low-rank property of the shared dictionary is enforced so that the shared dictionary does not absorb class-dictionary features, and a coherence constraint term is added between the low-rank shared dictionary and each class dictionary to prevent shared features from appearing in the class dictionaries. The model's objective function is given only as figures in the original; it combines a discriminative fidelity term, a sparsity penalty on the coding coefficients, a low-rank (nuclear-norm) penalty on the shared dictionary, and the coherence penalty between the shared dictionary and each class dictionary;
wherein the training sample set Y comprises C classes and Y_c represents the class-c training samples; D = [D_1, ..., D_C, D_{C+1}] is the training dictionary, D_c represents a class-c training dictionary, D_{C+1} represents the shared dictionary, and [D_1, ..., D_C] is the class-dictionary part; X is the coding coefficient matrix, X^c denotes the coding coefficients of Y_c over the full dictionary D, X_c^c denotes the coding coefficients of Y_c over the dictionary D_c, and a further coefficient matrix (shown only as a figure in the original) is X^c with certain rows removed; given a matrix A and a natural number n, μ(A, n) is the matrix of n identical columns, each column being the average vector of all columns of A.
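As an illustration of the two penalties the model relies on (an assumption, since the patent's objective appears only as figures: the Frobenius form ||D_shared^T D_c||_F^2 is the standard DLSI-style incoherence measure, and the nuclear norm is the usual low-rank surrogate), the NumPy helpers below compute both:

```python
import numpy as np

def incoherence_penalty(D_shared, class_dicts):
    """Sum over class dictionaries of ||D_shared^T D_c||_F^2 -- the
    coherence term penalized so shared features do not leak into the
    class dictionaries (illustrative form, not verbatim from the patent)."""
    return sum(np.linalg.norm(D_shared.T @ D_c, 'fro') ** 2
               for D_c in class_dicts)

def nuclear_norm(D_shared):
    """Nuclear norm (sum of singular values): the convex surrogate that
    keeps the shared dictionary low-rank."""
    return np.linalg.svd(D_shared, compute_uv=False).sum()
```

When the shared atoms are orthogonal to every class atom the penalty is exactly zero, which is the incoherence the training drives toward.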
Further, the variables in the dictionary model are updated by alternating iterative solution: the class dictionaries D_1, ..., D_C are updated with stochastic gradient descent, the shared dictionary D_{C+1} is updated iteratively with the alternating direction method of multipliers, and the coding coefficient matrix X is updated with a fast soft-threshold iterative algorithm, as follows:
1) Updating the class dictionaries D_1, ..., D_C
With X and the shared dictionary D_{C+1} fixed, each class dictionary D_c is solved by minimizing its part of the objective (the intermediate expressions, including an operator defined for a given matrix T, appear only as figures in the original). Considering the low complexity and fast running time of stochastic gradient descent (SGD), D_c is updated with an SGD step, where alpha_1 is the gradient-descent step size;
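The class-dictionary SGD step can be sketched as follows. This is a hypothetical simplification: it takes a gradient step on the plain fidelity term ||Y_c - D_c X_c||_F^2 only (the patent's gradient, shown only as figures, also involves the shared-dictionary and coherence terms), then projects each atom back onto the unit sphere, the usual dictionary constraint:

```python
import numpy as np

def sgd_dict_step(D_c, Y_c, X_c, alpha=0.01):
    """One SGD step on 0.5*||Y_c - D_c X_c||_F^2 w.r.t. D_c, followed by
    atom renormalization (illustrative sketch, not the patent's exact update)."""
    grad = -(Y_c - D_c @ X_c) @ X_c.T        # gradient of the fidelity term
    D_new = D_c - alpha * grad               # step size alpha plays the role of alpha_1
    norms = np.linalg.norm(D_new, axis=0)
    return D_new / np.maximum(norms, 1e-12)  # project atoms to unit l2 norm
```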
2) Updating the shared dictionary D_{C+1}
Let X = [X_1; ...; X_c; ...; X_C; X_{C+1}], where X_c and X_{C+1} are the coding coefficients of the training samples under the class-c dictionary and the shared dictionary, respectively, and define the auxiliary matrices I = S(X_{C+1})^T and J = X_{C+1}(X_{C+1})^T (S appears only as a figure in the original). With the class dictionaries and X fixed, D_{C+1} is updated with the alternating direction method of multipliers, iterating formulas (6)-(8) until convergence; formula (6) (shown only as a figure in the original) is a stochastic gradient descent update of D_{C+1} with step size alpha_2, and the nuclear-norm minimization subproblem is solved by the singular value thresholding algorithm:
V = θ(D_{C+1} + U)   (7)
U = U + D_{C+1} - V   (8)
where θ is the singular value shrinkage operator and E denotes the identity matrix.
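In standard ADMM treatments of nuclear-norm problems, the singular value shrinkage operator θ used in formula (7) is the proximal operator of the nuclear norm; a minimal NumPy sketch (the threshold name tau is an assumed parameter, since the patent gives θ only as a figure):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M by tau,
    setting negatives to zero -- the prox of tau*||.||_* used in ADMM
    nuclear-norm steps (illustrative sketch)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-threshold the spectrum
    return (U * s_shrunk) @ Vt               # rebuild the low-rank matrix
```

Because small singular values are zeroed, the output has lower rank than the input, which is how the shared dictionary is kept low-rank.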
3) Updating the coding coefficient matrix X
With the dictionary D fixed, X is updated with a fast soft-threshold iterative algorithm; writing the smooth part of the objective as s(X), each iterative update requires the derivative of s(X) (the expressions appear only as figures in the original).
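The "fast soft-threshold iterative algorithm" is FISTA; a generic sketch for a single signal is below (an assumption: it solves the standard lasso 0.5*||y - D x||^2 + lam*||x||_1, whereas the patent's smooth part s(X) includes additional discriminant terms given only as figures):

```python
import numpy as np

def fista(y, D, lam=0.1, n_iter=100):
    """FISTA: ISTA with Nesterov momentum. Each iteration takes a gradient
    step on the smooth part, then applies the soft-threshold operator."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = z = np.zeros(D.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = z - (D.T @ (D @ z - y)) / L      # gradient step at the momentum point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```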
The beneficial effects of the invention are as follows: a new dictionary learning method is provided for image classification; the coherence constraint term between the low-rank shared dictionary and the class dictionaries prevents shared features from appearing in the class dictionaries, improves the discriminability of the trained dictionary, and optimizes the sparse representation of images, thereby increasing the accuracy of image classification.
Drawings
FIG. 1 is a dictionary sparse representation model in accordance with the present invention.
Fig. 2 is a flow chart of the classification of experimental images according to the present invention.
FIG. 3 shows the experimental data sets described in the invention, where (a) shows the AR gender data set and (b) shows the COIL-100 data set.
FIG. 4 shows the effect of the shared dictionary size on the overall classification accuracy on the AR gender data in the invention.
FIG. 5 shows the effect of the shared dictionary size on the overall classification accuracy on the COIL-100 data in the invention.
Detailed Description
The invention will be further described with reference to specific examples, but the scope of the invention is not limited thereto.
Referring to FIGS. 1 to 5, in the image classification method based on incoherent joint dictionary learning, since different classes of images contain shared features, the invention trains one class dictionary for each class of images and one shared dictionary for all images. In addition, the low-rank property of the shared dictionary is enforced so that the shared dictionary does not absorb class-dictionary features, and a coherence constraint term is added between the low-rank shared dictionary and each class dictionary to prevent shared features from appearing in the class dictionaries.
The model's objective function is given only as figures in the original; it combines a discriminative fidelity term, a sparsity penalty on the coding coefficients, a low-rank (nuclear-norm) penalty on the shared dictionary, and the coherence penalty between the shared dictionary and each class dictionary;
wherein the training sample set Y comprises C classes and Y_c represents the class-c training samples; D = [D_1, ..., D_C, D_{C+1}] is the training dictionary, D_c represents a class-c training dictionary, D_{C+1} represents the shared dictionary, and [D_1, ..., D_C] is the class-dictionary part; X is the coding coefficient matrix, X^c denotes the coding coefficients of Y_c over the full dictionary D, X_c^c denotes the coding coefficients of Y_c over the dictionary D_c, and a further coefficient matrix (shown only as a figure in the original) is X^c with certain rows removed; given a matrix A and a natural number n, μ(A, n) is the matrix of n identical columns, each column being the average vector of all columns of A.
To minimize the dictionary objective function, the class dictionaries D_1, ..., D_C, the shared dictionary D_{C+1}, and the sparse coding matrix X are preferably updated separately by an alternating iterative solution method;
1) Updating the class dictionaries D_1, ..., D_C
With X and the shared dictionary D_{C+1} fixed, each class dictionary D_c is solved by minimizing its part of the objective (the intermediate expressions, including an operator defined for a given matrix T, appear only as figures in the original). Considering the low complexity and fast running time of stochastic gradient descent (SGD), D_c is updated with an SGD step, where alpha_1 is the gradient-descent step size;
2) Updating the shared dictionary D_{C+1}
Let X = [X_1; ...; X_c; ...; X_C; X_{C+1}], where X_c and X_{C+1} are the coding coefficients of the training samples under the class-c dictionary and the shared dictionary, respectively, and define the auxiliary matrices I = S(X_{C+1})^T and J = X_{C+1}(X_{C+1})^T (S appears only as a figure in the original). With the class dictionaries and X fixed, D_{C+1} is updated with the alternating direction method of multipliers, iterating formulas (6)-(8) until convergence; formula (6) (shown only as a figure in the original) is a stochastic gradient descent update of D_{C+1} with step size alpha_2, and the nuclear-norm minimization subproblem is solved by the singular value thresholding algorithm:
V = θ(D_{C+1} + U)   (7)
U = U + D_{C+1} - V   (8)
where θ is the singular value shrinkage operator and E denotes the identity matrix.
3) Updating the coding coefficient matrix X
With the dictionary D fixed, X is updated with a fast soft-threshold iterative algorithm; writing the smooth part of the objective as s(X), each iterative update requires the derivative of s(X) (the expressions appear only as figures in the original).
Classification was performed on two image data sets: the AR gender database and the COIL-100 database. The experimental procedure is divided into five steps: first, preprocess all data sets to reduce interference with the experiments; second, extract features of each image to capture useful information and reduce the feature dimension; third, train the discriminant dictionary with the proposed dictionary learning method; then, sparsely code each image over the trained dictionary; finally, classify the image according to the reconstruction error and the dictionary coding term, and compute the Overall Classification Accuracy (OCA).
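The final step of the pipeline can be sketched as classification by class-wise reconstruction error (a simplified assumption: the patent's decision rule also uses a dictionary coding term, which is omitted here; the function names and the least-squares coder interface are illustrative):

```python
import numpy as np

def classify(y, class_dicts, code_fn):
    """Assign y to the class whose dictionary reconstructs it best.
    code_fn(y, D_c) returns coding coefficients of y over D_c."""
    errs = []
    for D_c in class_dicts:
        x_c = code_fn(y, D_c)
        errs.append(np.linalg.norm(y - D_c @ x_c))  # reconstruction error
    return int(np.argmin(errs))                      # smallest-error class wins
```

In practice code_fn would be the sparse coder (e.g. the fast soft-threshold iteration) run over each class dictionary, or over the class plus shared dictionary.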
AR gender experiment: the AR gender face data set images are divided into two classes, each containing 700 pictures. The dimension of each image after feature extraction is 300; in the invention, the size of each class dictionary is 300 and the size of the shared dictionary is set to 3. The influence of the shared dictionary size on classification accuracy is shown in FIG. 4. The classification accuracy obtained by various dictionary learning methods is shown in Table 1.
Dictionary learning method    Classification accuracy (%)
SRC                           91
COPAR                         93.56
LRSDL                         92.86
DLSI                          93.86
FDDL                          91.71
Proposed method               94.56
TABLE 1
As shown in Table 1, the classification accuracy obtained with the dictionary training method of the invention on the AR gender data set is 94.56%, an improvement of 0.7% over the suboptimal algorithm DLSI, and of 3.56%, 1%, 1.7% and 2.85% over the SRC, COPAR, LRSDL and FDDL dictionary learning methods, respectively.
COIL-100 experiment: the COIL-100 image data set contains 100 classes, each containing 72 pictures; the dimension of each image after feature extraction is 324. In the invention, the size of each class dictionary is 45 and the size of the shared dictionary is 3; the influence of the shared dictionary size on the overall classification accuracy is shown in FIG. 5.
The overall classification accuracy obtained by various dictionary learning methods is shown in a second table.
Dictionary learning method    Overall classification accuracy (%)
SRC                           89.61
COPAR                         90.29
LRSDL                         91.76
DLSI                          92.94
FDDL                          88.82
Proposed method               93.53
TABLE 2
As can be seen from Table 2, the overall classification accuracy obtained with the dictionary training method of the invention on the COIL-100 data set is 93.53%, an improvement of 0.59% over the suboptimal algorithm DLSI, and of 3.92%, 3.24%, 1.77% and 4.71% over the SRC, COPAR, LRSDL and FDDL dictionary learning methods, respectively.
The foregoing illustrates the principles of the invention with reference to the accompanying drawings; the invention is not limited to the specific embodiments shown.
What is not described in detail in this specification is prior art known to those skilled in the art.

Claims (1)

1. An image classification method based on incoherence joint dictionary learning is characterized by comprising the following steps:
firstly, preprocessing all data sets;
secondly, extracting the characteristics of each image;
furthermore, training a discriminant dictionary by adopting a dictionary learning method;
then, sparsely coding the image on the trained dictionary;
finally, classifying the images according to the reconstruction errors and dictionary coding items;
the process of training the discriminant dictionary by adopting the dictionary learning method is as follows:
training a class dictionary for each class of images, training a shared dictionary for all images, ensuring the low rank property of the shared dictionary, and adding a coherent constraint term between the low rank shared dictionary and the class dictionary, wherein the model is as follows:
(the objective function of the model is given as figures in the original, combining a discriminative fidelity term, a sparsity penalty on the coding coefficients, a low-rank penalty on the shared dictionary, and the coherence constraint term between the shared dictionary and each class dictionary)
wherein the training sample set Y comprises C classes and Y_c represents the class-c training samples; D = [D_1, ..., D_C, D_{C+1}] is the training dictionary, D_c represents a class-c training dictionary, D_{C+1} represents the shared dictionary, and [D_1, ..., D_C] is the class-dictionary part; X is the coding coefficient matrix, X^c is the coding coefficients of Y_c over the corresponding dictionary D, X_c^c is the coding coefficients of Y_c over the corresponding dictionary D_c, and a further coefficient matrix (shown only as a figure in the original) is X^c with certain rows removed; given a matrix A and a natural number n, μ(A, n) is the matrix of n identical columns, each column being the average vector of all columns of A;
alternating iterative solution to update the variables in the dictionary model includes: updating the class dictionaries with stochastic gradient descent, iteratively updating the shared dictionary with the alternating direction method of multipliers, and updating the coding coefficient matrix with a fast soft-threshold iterative algorithm, as follows:
1) Updating the class dictionaries D_1, ..., D_C
With X and the shared dictionary D_{C+1} fixed, each class dictionary D_c is solved by minimizing its part of the objective (the intermediate expressions, including an operator defined for a given matrix T, appear only as figures in the original); stochastic gradient descent (SGD) is used to update D_c, where alpha_1 is the gradient-descent step size;
2) Updating the shared dictionary D_{C+1}
Let X = [X_1; ...; X_c; ...; X_C; X_{C+1}], where X_c and X_{C+1} are the coding coefficients of the training samples under the class-c dictionary and the shared dictionary, respectively; with the class dictionaries and X fixed, the shared dictionary D_{C+1} is updated with the alternating direction method of multipliers, iterating formulas (6)-(8) until convergence; formula (6) (shown only as a figure in the original) is a stochastic gradient descent update of D_{C+1} with step size alpha_2, and the nuclear-norm minimization subproblem is solved by the singular value thresholding algorithm:
V = θ(D_{C+1} + U)   (7)
U = U + D_{C+1} - V   (8)
where θ is the singular value shrinkage operator and E denotes the identity matrix;
3) Updating the coding coefficient matrix X
With the dictionary D fixed, X is updated with a fast soft-threshold iterative algorithm; writing the smooth part of the objective as s(X), each iterative update requires the derivative of s(X) (the expressions appear only as figures in the original).
CN202010316589.1A 2020-04-21 2020-04-21 Image classification method based on incoherence combined dictionary learning Active CN111666967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010316589.1A CN111666967B (en) 2020-04-21 2020-04-21 Image classification method based on incoherence combined dictionary learning


Publications (2)

Publication Number Publication Date
CN111666967A CN111666967A (en) 2020-09-15
CN111666967B true CN111666967B (en) 2023-06-13

Family

ID=72382643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010316589.1A Active CN111666967B (en) 2020-04-21 2020-04-21 Image classification method based on incoherence combined dictionary learning

Country Status (1)

Country Link
CN (1) CN111666967B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778807A (en) * 2016-11-22 2017-05-31 天津大学 The fine granularity image classification method of dictionary pair is relied on based on public dictionary pair and class
CN106815876A (en) * 2016-12-30 2017-06-09 清华大学 Image sparse characterizes the combined optimization training method of many dictionary learnings
US9865036B1 (en) * 2015-02-05 2018-01-09 Pixelworks, Inc. Image super resolution via spare representation of multi-class sequential and joint dictionaries
CN108573263A (en) * 2018-05-10 2018-09-25 西安理工大学 A kind of dictionary learning method of co-ordinative construction rarefaction representation and low-dimensional insertion
CN108985177A (en) * 2018-06-21 2018-12-11 南京师范大学 A kind of facial image classification method of the quick low-rank dictionary learning of combination sparse constraint
CN109409201A (en) * 2018-09-05 2019-03-01 昆明理工大学 A kind of pedestrian's recognition methods again based on shared and peculiar dictionary to combination learning
CN110705343A (en) * 2019-08-20 2020-01-17 西南科技大学 Face recognition method and system for structure-incoherent projection dictionary pair learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Discriminative Dictionary Learning with Low-Rank Regularization for Face Recognition; Liangyue Li, Sheng Li, Yun Fu; FG 2013; full text *
Fast Low-Rank Shared Dictionary Learning for Image Classification; Tiep Huu Vu and Vishal Monga; TIP 2017; Sections 1-3 of the main text *

Also Published As

Publication number Publication date
CN111666967A (en) 2020-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant