CN110852345B - Image classification method - Google Patents

Info

Publication number
CN110852345B
CN110852345B (application CN201910934378.1A)
Authority
CN
China
Prior art keywords
sparse reconstruction
sparse
data
matrix
projection
Prior art date
Legal status
Active
Application number
CN201910934378.1A
Other languages
Chinese (zh)
Other versions
CN110852345A (en)
Inventor
胡瑞瑞
潘喆琼
龙正雄
崔航凯
俞巧楠
贺阳
毛倩倩
杨建立
赵婉芳
严晓昇
孔旭锋
Current Assignee
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority to CN201910934378.1A
Publication of CN110852345A
Application granted
Publication of CN110852345B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image classification method, which comprises the following steps. S1: uniformly partitioning the high-dimensional matrix data into modules and forming third-order tensor data; S2: taking the mean of the sparse reconstruction weights of the selected modules as the sparse reconstruction weight of the whole two-dimensional data; S3: extracting unsupervised sparse reconstruction feature information and supervised pairwise constraint feature information separately; S4: adaptively setting the linear weighting parameter; S5: computing the sparse reconstruction weight of each module, obtaining the corresponding sparse reconstruction error, and computing the mean of the sparse reconstruction weights; S6: using the obtained mean as the sparse reconstruction weight of the whole data set and applying it to solving the tensor sparsity preserving projection matrix on the obtained third-order tensor data; S7: completing the classification of the test samples with a support vector machine. The invention fuses pairwise constraint feature information and sparse reconstruction feature information to the greatest extent and realizes adaptive setting of the linear weighting parameter.

Description

Image classification method
Technical Field
The invention relates to the technical field of image processing, in particular to visual classification, and specifically to an image classification method.
Background
To describe the shortcomings of sparsity preserving projection (SPP) and pairwise-constraint-guided feature projection (PCFP) more precisely, a two-class two-dimensional dataset is first created. The two classes contain equal numbers of samples and are represented by dots and triangles, respectively. SPP and PCFP are applied to this two-dimensional dataset to obtain the projected one-dimensional spaces. Fig. 1 shows the resulting projection embedding spaces, in which solid dots and solid triangles represent labeled data. The bold dashed lines represent the one-dimensional embedding spaces obtained after projection by SPP and PCFP. Fig. 1-1 shows the one-dimensional projection embedding spaces on the initial dataset; Fig. 1-2 shows the one-dimensional projection embedding spaces on the dataset after the annotation is changed; Fig. 1-3 shows the one-dimensional projection embedding spaces on the dataset stretched to twice its length in the longitudinal direction.
Compared with the sample data in Fig. 1-1, the labeled sample data in Fig. 1-2 change, yet the one-dimensional embedding space produced by the unsupervised SPP remains unchanged, while the one-dimensional embedding space produced by the supervised PCFP changes and its classification effect is clearly inferior to that in Fig. 1-1. In Fig. 1-3 the sample data are stretched to twice their length in the longitudinal direction, which changes the overall structure of the samples; the projected one-dimensional embedding space generated by PCFP is unaffected, whereas the one generated by SPP is strongly affected and its classification effect drops markedly. Fig. 1 thus shows that SPP is susceptible to global structural variation of the samples, while PCFP is relatively sensitive to the samples that make up the pairwise constraint sets. Fusing SPP and PCFP therefore preserves their respective strengths and overcomes their respective drawbacks.
Disclosure of Invention
In order to solve the above problems, the present invention provides an image classification method.
An image classification method comprising the steps of:
S1: uniformly partitioning the high-dimensional matrix data into modules and forming third-order tensor data;
S2: taking the mean of the sparse reconstruction weights of the selected modules as the sparse reconstruction weight of the whole two-dimensional data;
S3: extracting unsupervised sparse reconstruction feature information and supervised pairwise constraint feature information separately;
S4: fusing the two kinds of feature information in a linear manner, and adaptively setting the linear weighting parameter through a genetic algorithm;
S5: computing the sparse reconstruction weight of each module and obtaining the corresponding sparse reconstruction error, selecting the modules whose sparse reconstruction error is smaller than the average sparse reconstruction error, and computing the mean of their sparse reconstruction weights;
S6: using the obtained mean as the sparse reconstruction weight of the whole data set and applying it to solving the tensor sparsity preserving projection matrix on the obtained third-order tensor data;
S7: completing the classification of the test samples with a support vector machine.
Preferably, the selected module is a module with a sparse reconstruction error lower than the average value of the sparse reconstruction errors of all the modules.
Preferably, the calculation of the unsupervised sparse reconstruction feature information includes:
[Equation image BDA0002221222120000021 not reproduced in the source text]
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, x_i denotes the i-th sample, I denotes the identity matrix, S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X, and s_i denotes the module sparse reconstruction weight mean of sample x_i.
Preferably, the calculation of the supervised pairwise constraint feature information includes:
[Equation image BDA0002221222120000031 not reproduced in the source text]
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, x_i denotes the i-th sample, x_j denotes the j-th sample, ML = {(x_i, x_j) | x_i and x_j belong to the same class} denotes the positive constraint set, CL = {(x_i, x_j) | x_i and x_j do not belong to the same class} denotes the negative constraint set, I denotes the identity matrix, T = {t_1, t_2, ..., t_n} denotes the constrained projection matrix of X, and t_i denotes the constrained projection of sample x_i.
Preferably, the calculation for fusing the two kinds of feature information in a linear manner includes:
[Equation image BDA0002221222120000032 not reproduced in the source text]
where
[Equation image BDA0002221222120000033 not reproduced in the source text]
S_α = S + S^T - S^T S,
w^T (βXX^T + (1-β)I) w = 1,
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, ML = {(x_i, x_j) | x_i and x_j belong to the same class} denotes the positive constraint set, CL = {(x_i, x_j) | x_i and x_j do not belong to the same class} denotes the negative constraint set, I denotes the identity matrix, T = {t_1, t_2, ..., t_n} denotes the constrained projection matrix of X, t_i denotes the constrained projection of sample x_i, S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X, and β denotes the linear weighting parameter.
Preferably, the adaptive setting of the linear weighting parameter by the genetic algorithm includes:
Binary chromosome coding is adopted for the population of the linear weighting parameter β, and selection, crossover and mutation of a genetic algorithm are used for next-generation selection until a globally optimal individual is obtained as the linear weighting parameter β.
Preferably, using the obtained mean as the sparse reconstruction weight of the whole data set and applying it to solving the tensor sparsity preserving projection matrix on the obtained third-order tensor data includes:
converting the problem into the following generalized eigenvalue problem:
(βS_α + (1-β)P_α)w = λ(βXX^T + (1-β)I)w,
S_α = S + S^T - S^T S,
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, I denotes the identity matrix, and S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X;
obtaining the projection matrix W = [w_1, w_2, ..., w_d].
Preferably, classifying the test samples by using the support vector machine includes:
using the projection matrix W = [w_1, w_2, ..., w_d], the projected data are computed and classified with a support vector machine.
The invention has the following beneficial effects:
1. The invention fuses pairwise constraint feature information and sparse reconstruction feature information to the greatest extent, solving the problem of insufficient supervision information in existing sparse-representation dimensionality reduction algorithms for faces;
2. The invention solves the problem that the parameters of existing sparse-representation dimensionality reduction algorithms for faces cannot be set automatically, realizing adaptive setting of the linear weighting parameter through a genetic algorithm.
Drawings
The invention will be described in further detail with reference to the drawings and the detailed description.
FIG. 1 is a schematic illustration of the projection embedding spaces of sparsity preserving projection (SPP) and pairwise-constraint-guided feature projection (PCFP) on a two-dimensional dataset; FIG. 1-1 shows the one-dimensional projection embedding spaces on the initial dataset; FIG. 1-2 shows the one-dimensional projection embedding spaces on the dataset after the annotation is changed; FIG. 1-3 shows the one-dimensional projection embedding spaces on the dataset stretched to twice its length in the longitudinal direction;
FIG. 2 is a flow chart of an image classification method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the uniform eight-block partition of a high-dimensional face image matrix and the constructed third-order tensor image in an image classification method according to an embodiment of the present invention, in which (a) shows the face image, (b) shows the image uniformly divided into eight blocks, and (c) shows the third-order tensor image;
FIG. 4 is a flowchart of taking a sparse reconstruction weight mean of a selected module as a sparse reconstruction weight of whole two-dimensional data in an image classification method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of taking a sparse reconstruction weight mean of a selected module as a sparse reconstruction weight of whole two-dimensional data in an image classification method according to an embodiment of the present invention;
fig. 6 is a framework diagram of automatic optimization setting of linear weighting parameters based on Genetic Algorithm (GA) in an image classification method according to an embodiment of the present invention.
Detailed Description
The technical scheme of the present invention will be further described with reference to the accompanying drawings, but the present invention is not limited to these examples.
The basic idea of this embodiment is to provide a visual classification method that realizes semi-supervised sparse face classification using modularized pairwise constraint feature information and sparse reconstruction feature information. The algorithm first partitions the face image into blocks and extracts unsupervised sparse reconstruction feature information and supervised pairwise constraint feature information separately; the two kinds of feature information are fused linearly, the linear weighting parameter is set adaptively through a genetic algorithm, a projection matrix is obtained by solving a generalized eigenvalue problem and used for projection-based dimensionality reduction, and finally the reduced feature data are classified with a support vector machine. The method overcomes the respective drawbacks of sparse representation and pairwise constraints for faces, solves the problems that existing sparse-representation dimensionality reduction algorithms for faces lack supervision information and cannot set their parameters automatically, and fuses pairwise constraint feature information and sparse reconstruction feature information to the greatest extent.
Based on the above idea, an embodiment of the present invention proposes an image classification method which, as shown in Fig. 2, comprises the following steps:
S1: uniformly partitioning the high-dimensional matrix data into modules and forming third-order tensor data;
S2: taking the mean of the sparse reconstruction weights of the selected modules as the sparse reconstruction weight of the whole two-dimensional data;
S3: extracting unsupervised sparse reconstruction feature information and supervised pairwise constraint feature information separately;
S4: fusing the two kinds of feature information in a linear manner, and adaptively setting the linear weighting parameter through a genetic algorithm;
S5: computing the sparse reconstruction weight of each module and obtaining the corresponding sparse reconstruction error, selecting the modules whose sparse reconstruction error is smaller than the average sparse reconstruction error, and computing the mean of their sparse reconstruction weights;
S6: using the obtained mean as the sparse reconstruction weight of the whole data set and applying it to solving the tensor sparsity preserving projection matrix on the obtained third-order tensor data;
S7: completing the classification of the test samples with a support vector machine.
Fig. 3 shows a schematic diagram of the uniform eight-block partition of a high-dimensional face image matrix and the constructed third-order tensor image, in which (a) shows the face image, (b) shows the image uniformly divided into eight blocks, and (c) shows the third-order tensor image. The third-order tensor data constructed from the uniform modules preserves not only the spatial relationships among the elements within each module but also the spatial relationships between the modules.
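As an illustration of step S1, the following sketch uniformly partitions one image matrix into blocks and stacks them into a third-order tensor. The 2x4 block grid, the 112x92 image size, and the function name are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def to_third_order_tensor(image, rows=2, cols=4):
    """Uniformly partition a 2-D image into rows*cols blocks and stack the
    blocks along a new first axis, giving a (blocks, block_h, block_w) tensor."""
    h, w = image.shape
    assert h % rows == 0 and w % cols == 0, "image must divide evenly into blocks"
    bh, bw = h // rows, w // cols
    blocks = [image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
              for r in range(rows) for c in range(cols)]
    return np.stack(blocks, axis=0)

# Example: a 112x92 face image split into 8 uniform blocks (2x4 grid)
face = np.random.rand(112, 92)
tensor = to_third_order_tensor(face)
print(tensor.shape)  # (8, 56, 23)
```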
On the basis of the module sparse representation, the modules whose sparse reconstruction error is lower than the mean of the sparse reconstruction errors of all modules are selected, and the mean of the sparse reconstruction weights of the selected modules is taken as the sparse reconstruction weight of the whole two-dimensional data.
Fig. 4 shows the flowchart for taking the mean of the sparse reconstruction weights of the selected modules as the sparse reconstruction weight of the whole two-dimensional data. The image data set is first divided into eight uniform block data sets, and the sparse reconstruction weight and sparse reconstruction error of each module are obtained through sparse reconstruction computation; the modules whose error is smaller than the average sparse reconstruction error are then selected together with their sparse reconstruction weights, and the mean of the sparse reconstruction weights of the selected modules is taken as the sparse reconstruction weight of the whole two-dimensional data. Taking Fig. 5 as an example, the image data set is divided into eight uniform block data sets, and sparse reconstruction computation yields the sparse reconstruction weights and errors (S_1, β_1), (S_2, β_2), (S_3, β_3), (S_4, β_4), (S_5, β_5), (S_6, β_6), (S_7, β_7), (S_8, β_8) of the modules. The modules whose error is smaller than the average sparse reconstruction error are selected, giving the sparse reconstruction weights S_2, S_3, S_4 and S_6. Finally, the mean of the sparse reconstruction weights of the selected modules is taken as the sparse reconstruction weight of the whole two-dimensional data:
S = (S_2 + S_3 + S_4 + S_6) / 4.
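To make steps S2 and S5 concrete, here is a minimal sketch under stated assumptions: each module is represented as a matrix of vectorized blocks of shape (n_samples, block_dim), the l1 sparse coding is done with scikit-learn's Lasso (the patent does not prescribe a particular sparse solver), and the function names and the regularization value are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def module_sparse_weights(X_block, alpha=0.01):
    """For one module (shape: n_samples x block_dim), reconstruct each sample
    from the remaining samples with l1-penalised regression; return the
    weight matrix and the module's mean reconstruction error."""
    n = X_block.shape[0]
    S = np.zeros((n, n))
    errors = []
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        model = Lasso(alpha=alpha, max_iter=5000)
        model.fit(X_block[idx].T, X_block[i])            # x_i ≈ X_{-i}^T s_i
        S[i, idx] = model.coef_
        errors.append(np.linalg.norm(X_block[i] - X_block[idx].T @ model.coef_))
    return S, float(np.mean(errors))

def fused_sparse_weight(blocks):
    """Average the weight matrices of the modules whose reconstruction error
    is below the mean error over all modules (steps S2 and S5)."""
    results = [module_sparse_weights(b) for b in blocks]
    mean_err = np.mean([err for _, err in results])
    selected = [S for S, err in results if err < mean_err]
    if not selected:                  # degenerate case: all module errors equal
        selected = [S for S, _ in results]
    return np.mean(selected, axis=0)
```

The matrix returned by fused_sparse_weight corresponds to the mean weight over the below-average-error modules described above.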
In this embodiment, the calculation of the unsupervised sparse reconstruction feature information includes:
[Equation image BDA0002221222120000072 not reproduced in the source text]
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, x_i denotes the i-th sample, I denotes the identity matrix, S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X, and s_i denotes the module sparse reconstruction weight mean of sample x_i.
In this embodiment, the calculation of the supervised pairwise constraint feature information includes:
[Equation image BDA0002221222120000073 not reproduced in the source text]
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, x_i denotes the i-th sample, x_j denotes the j-th sample, ML = {(x_i, x_j) | x_i and x_j belong to the same class} denotes the positive constraint set, CL = {(x_i, x_j) | x_i and x_j do not belong to the same class} denotes the negative constraint set, I denotes the identity matrix, T = {t_1, t_2, ..., t_n} denotes the constrained projection matrix of X, and t_i denotes the constrained projection of sample x_i.
In this embodiment, the calculation for fusing the two kinds of feature information in a linear manner includes:
[Equation image BDA0002221222120000081 not reproduced in the source text]
where
[Equation image BDA0002221222120000082 not reproduced in the source text]
S_α = S + S^T - S^T S,
w^T (βXX^T + (1-β)I) w = 1,
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, ML = {(x_i, x_j) | x_i and x_j belong to the same class} denotes the positive constraint set, CL = {(x_i, x_j) | x_i and x_j do not belong to the same class} denotes the negative constraint set, I denotes the identity matrix, T = {t_1, t_2, ..., t_n} denotes the constrained projection matrix of X, t_i denotes the constrained projection of sample x_i, S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X, and β denotes the linear weighting parameter.
The invention fuses pairwise constraint feature information and sparse reconstruction feature information to the greatest extent and solves the problem of insufficient supervision information in existing sparse-representation dimensionality reduction algorithms for faces.
Since the search direction of the weighting balance parameter β within the [0, 1] interval is uncertain, adaptive adjustment of the search direction under specific guidance is required. The invention exploits the global search capability of a genetic algorithm (GA) to automatically optimize the weighting balance parameter β for fusing sparse reconstruction feature information and pairwise constraint feature information under different numbers of training samples and features. The basic idea is to encode the β population with binary chromosomes and to perform next-generation selection through GA selection, crossover and mutation until a globally optimal individual is obtained as the weighting parameter β.
Fig. 6 shows the framework of the genetic-algorithm (GA) based automatic optimization of the linear weighting balance parameter. First, sparse reconstruction information and pairwise constraint information are extracted and fused by linear weighting to obtain an evaluation function; the data are projected into a low-dimensional space, and the fitness value of each individual is computed with a chosen statistical classification method. The β population (first generation, second generation, ..., (n-1)-th generation, n-th generation) is encoded with binary chromosomes, and fitness values are obtained by decoding and applying the evaluation function. The half of the individuals with the higher fitness values passes directly into the next generation, while the other half is selected by roulette wheel; random two-point crossover and random single-point mutation are then applied in turn, and fitness values are again obtained by decoding and applying the evaluation function. If the optimal value remains unchanged for three generations, the balance parameter β is decoded and output; otherwise the cycle is repeated with binary chromosome encoding.
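The following is a minimal sketch of this loop, assuming a user-supplied fitness function (for example, the classification accuracy obtained with the projected data for a candidate β, as described above). The population size, chromosome length, mutation probability, and function names are illustrative assumptions rather than values prescribed by the patent:

```python
import numpy as np

BITS, POP, P_MUT = 16, 20, 0.02      # chromosome length, population size, mutation rate
rng = np.random.default_rng(0)

def decode(chrom):
    """Map a binary chromosome to a beta value in [0, 1]."""
    return int("".join(str(b) for b in chrom), 2) / (2 ** BITS - 1)

def optimize_beta(fitness, patience=3):
    """Binary-coded GA: the better half passes on directly, the rest are chosen
    by roulette wheel, then two-point crossover and single-point mutation are
    applied; stop when the best fitness is unchanged for `patience` generations."""
    pop = rng.integers(0, 2, size=(POP, BITS))
    best, stall = None, 0
    while stall < patience:
        scores = np.array([fitness(decode(c)) for c in pop])
        elite = pop[np.argsort(scores)[::-1][:POP // 2]]          # keep the better half
        probs = scores - scores.min() + 1e-9
        parents = pop[rng.choice(POP, size=POP // 2, p=probs / probs.sum())]
        children = []
        for i in range(0, len(parents), 2):
            a, b = parents[i].copy(), parents[(i + 1) % len(parents)].copy()
            p, q = sorted(rng.choice(BITS, size=2, replace=False))
            a[p:q], b[p:q] = b[p:q].copy(), a[p:q].copy()         # two-point crossover
            for c in (a, b):
                if rng.random() < P_MUT:
                    c[rng.integers(BITS)] ^= 1                    # single-point mutation
                children.append(c)
        pop = np.vstack([elite, np.array(children)[:POP - POP // 2]])
        gen_best = float(scores.max())
        stall = stall + 1 if best is not None and gen_best <= best else 0
        best = gen_best if best is None or gen_best > best else best
    scores = np.array([fitness(decode(c)) for c in pop])
    return decode(pop[int(np.argmax(scores))])
```

A call such as optimize_beta(lambda beta: evaluate_projection_accuracy(beta)) would return the decoded β, where evaluate_projection_accuracy is a hypothetical wrapper that builds the fused projection for that β and scores an SVM on validation data.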
The invention solves the problem that the parameters of existing sparse-representation dimensionality reduction algorithms for faces cannot be set automatically; the adaptive setting of the linear weighting parameter is realized through a genetic algorithm.
In this embodiment, using the obtained mean as the sparse reconstruction weight of the whole data set and applying it to solving the tensor sparsity preserving projection matrix on the obtained third-order tensor data includes:
converting the problem into the following generalized eigenvalue problem:
(βS_α + (1-β)P_α)w = λ(βXX^T + (1-β)I)w,
S_α = S + S^T - S^T S,
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, I denotes the identity matrix, and S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X;
obtaining the projection matrix W = [w_1, w_2, ..., w_d].
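As an illustration of this step, the sketch below solves the generalized eigenvalue problem with SciPy's symmetric generalized eigensolver and keeps the d leading eigenvectors. It assumes S_α and P_α have already been formed as symmetric matrices of the same size as βXX^T + (1-β)I, and that 0 ≤ β < 1 so the right-hand matrix is positive definite; the solver choice and function name are assumptions, since the patent does not prescribe a particular routine:

```python
import numpy as np
from scipy.linalg import eigh

def projection_matrix(X, S_alpha, P_alpha, beta, d):
    """Solve (beta*S_alpha + (1-beta)*P_alpha) w = lambda (beta*X X^T + (1-beta)*I) w
    and return the d eigenvectors with the largest eigenvalues as W = [w_1, ..., w_d].
    X holds the samples as columns; S_alpha and P_alpha are assumed symmetric."""
    A = beta * S_alpha + (1.0 - beta) * P_alpha
    B = beta * (X @ X.T) + (1.0 - beta) * np.eye(X.shape[0])   # positive definite for beta < 1
    eigvals, eigvecs = eigh(A, B)           # generalized symmetric eigenproblem
    return eigvecs[:, np.argsort(eigvals)[::-1][:d]]
```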
In this embodiment, the classification of the test sample using the support vector machine includes:
using the projection matrix W = [w_1, w_2, ..., w_d], the projected data are computed and classified with a support vector machine.
In machine learning, a support vector machine (SVM) is a supervised learning model, with associated learning algorithms, that analyses data for classification and regression. Given a set of training instances, each labeled as belonging to one of two classes, an SVM training algorithm builds a model that assigns new instances to one of the two classes, making it a non-probabilistic binary linear classifier. The SVM model represents the instances as points in space, mapped so that the instances of the two classes are separated by as wide a margin as possible. New instances are then mapped into the same space and assigned a class according to which side of the margin they fall on. Using a support vector machine to classify data, as in this embodiment, is prior art and is therefore not described further here.
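For completeness, a minimal sketch of the final projection and classification step; the scikit-learn SVC (with its default kernel) is an illustrative choice, since the patent only requires a support vector machine:

```python
from sklearn.svm import SVC

def classify_with_svm(W, X_train, y_train, X_test):
    """Project vectorized samples (rows) with W and classify with an SVM."""
    clf = SVC()                       # default RBF-kernel SVM on the reduced features
    clf.fit(X_train @ W, y_train)
    return clf.predict(X_test @ W)
```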
The fused dimensionality reduction algorithm not only incorporates the pairwise constraint information of the samples but also preserves their global sparse reconstruction information. Furthermore, since the two kinds of feature information differ in nature, how to set the fusion parameter so as to obtain the best dimensionality-reduction and classification performance on different data is also a key issue.
In terms of feature information fusion, the algorithm first extracts the unsupervised sparse reconstruction feature information and the supervised pairwise constraint feature information separately, then fuses them by linear weighting, with the weighting parameter set through a genetic algorithm (GA). Finally, the projection matrix is obtained by solving the generalized eigenvalue problem. The projected low-dimensional data preserve the global geometry and local neighbor information contained in the sparse reconstruction while also better preserving the constraint information among the data.
The advantages of this algorithm are as follows:
(1) The linear weighted fusion effectively inherits the respective characteristics of pairwise constraints and sparse representation; the fused algorithm maintains both the pairwise constraint relations of the samples and their global sparse reconstruction relations;
(2) A genetic algorithm is introduced to estimate the weighting parameter of the linear fusion of feature information, so that the algorithm can automatically and adaptively obtain the optimal weighting parameter for different data sets and dimensionalities, thereby achieving the best feature information fusion performance.
Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (8)

1. An image classification method, characterized by comprising the steps of:
S1: uniformly partitioning the high-dimensional matrix data into modules and forming third-order tensor data;
S2: taking the mean of the sparse reconstruction weights of the selected modules as the sparse reconstruction weight of the whole two-dimensional data;
S3: extracting unsupervised sparse reconstruction feature information and supervised pairwise constraint feature information separately;
S4: fusing the two kinds of feature information in a linear manner, and adaptively setting the linear weighting parameter through a genetic algorithm;
S5: computing the sparse reconstruction weight of each module and obtaining the corresponding sparse reconstruction error, selecting the modules whose sparse reconstruction error is smaller than the average sparse reconstruction error, and computing the mean of their sparse reconstruction weights;
S6: using the obtained mean as the sparse reconstruction weight of the whole data set and applying it to solving the tensor sparsity preserving projection matrix on the obtained third-order tensor data;
S7: completing the classification of the test samples with a support vector machine.
2. An image classification method according to claim 1, wherein the selected module is a module with a sparse reconstruction error lower than the mean of the sparse reconstruction errors of all modules.
3. An image classification method according to claim 1, wherein the calculation of the unsupervised sparse reconstruction feature information comprises:
[Equation image FDA0002221222110000011 not reproduced in the source text]
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, x_i denotes the i-th sample, I denotes the identity matrix, S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X, and s_i denotes the module sparse reconstruction weight mean of sample x_i.
4. An image classification method according to claim 1, wherein the calculation of the supervised pairwise constraint feature information comprises:
[Equation image FDA0002221222110000012 not reproduced in the source text]
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, x_i denotes the i-th sample, x_j denotes the j-th sample, ML = {(x_i, x_j) | x_i and x_j belong to the same class} denotes the positive constraint set, CL = {(x_i, x_j) | x_i and x_j do not belong to the same class} denotes the negative constraint set, I denotes the identity matrix, T = {t_1, t_2, ..., t_n} denotes the constrained projection matrix of X, and t_i denotes the constrained projection of sample x_i.
5. An image classification method according to claim 1, wherein the calculation for fusing the two kinds of feature information in a linear manner comprises:
[Equation image FDA0002221222110000021 not reproduced in the source text]
where
[Equation image FDA0002221222110000022 not reproduced in the source text]
S_α = S + S^T - S^T S,
w^T (βXX^T + (1-β)I) w = 1,
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, ML = {(x_i, x_j) | x_i and x_j belong to the same class} denotes the positive constraint set, CL = {(x_i, x_j) | x_i and x_j do not belong to the same class} denotes the negative constraint set, I denotes the identity matrix, T = {t_1, t_2, ..., t_n} denotes the constrained projection matrix of X, t_i denotes the constrained projection of sample x_i, S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X, and β denotes the linear weighting parameter.
6. An image classification method according to claim 1, wherein the adaptive setting of the linear weighting parameter through the genetic algorithm comprises:
adopting binary chromosome coding for the population of the linear weighting parameter β, and using selection, crossover and mutation of a genetic algorithm for next-generation selection until a globally optimal individual is obtained as the linear weighting parameter β.
7. An image classification method according to claim 1, wherein using the obtained mean as the sparse reconstruction weight of the whole data set and applying it to solving the tensor sparsity preserving projection matrix on the obtained third-order tensor data comprises:
converting the problem into the following generalized eigenvalue problem:
(βS_α + (1-β)P_α)w = λ(βXX^T + (1-β)I)w,
S_α = S + S^T - S^T S,
where X = {x_1, x_2, x_3, ..., x_n} denotes the training sample data, I denotes the identity matrix, and S = {s_1, s_2, ..., s_n} denotes the module sparse reconstruction weight mean matrix of X;
obtaining the projection matrix W = [w_1, w_2, ..., w_d].
8. The method of claim 1, wherein the classifying test samples using a support vector machine comprises:
using the projection matrix W = [w_1, w_2, ..., w_d], the projected data are computed and classified with a support vector machine.
CN201910934378.1A 2019-09-29 2019-09-29 Image classification method Active CN110852345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910934378.1A CN110852345B (en) 2019-09-29 2019-09-29 Image classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910934378.1A CN110852345B (en) 2019-09-29 2019-09-29 Image classification method

Publications (2)

Publication Number Publication Date
CN110852345A CN110852345A (en) 2020-02-28
CN110852345B true CN110852345B (en) 2023-06-09

Family

ID=69596359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910934378.1A Active CN110852345B (en) 2019-09-29 2019-09-29 Image classification method

Country Status (1)

Country Link
CN (1) CN110852345B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615026A (en) * 2018-12-28 2019-04-12 中国电子科技集团公司信息科学研究院 A kind of differentiation projecting method and pattern recognition device based on Sparse rules

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8494305B2 (en) * 2011-12-20 2013-07-23 Mitsubishi Electric Research Laboratories, Inc. Image filtering by sparse reconstruction on affinity net

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615026A (en) * 2018-12-28 2019-04-12 中国电子科技集团公司信息科学研究院 A kind of differentiation projecting method and pattern recognition device based on Sparse rules

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
胡鹏辉; 朱华平; 王春艳. Image retrieval algorithm based on sparse reconstruction coding. Journal of Henan University of Science and Technology (Natural Science Edition), 2018, (03), full text. *

Also Published As

Publication number Publication date
CN110852345A (en) 2020-02-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant