CN106203487A - Image classification method and device based on multi-kernel learning and multi-classifier fusion - Google Patents


Info

Publication number
CN106203487A
CN106203487A (application CN201610510844.XA)
Authority
CN
China
Prior art keywords
classifier
samples
training
kernel
classifiers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610510844.XA
Other languages
Chinese (zh)
Inventor
李妮
怀文卿
龚光红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201610510844.XA priority Critical patent/CN106203487A/en
Publication of CN106203487A publication Critical patent/CN106203487A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data


Abstract

The invention provides an image classification method and device based on multi-kernel learning and multi-classifier fusion. The method includes: S1, establishing a sample library containing different types of samples; S2, performing feature extraction on each type of sample and obtaining, from the feature extraction results, a kernel function corresponding to each sample type; S3, synthesizing the kernel functions obtained in S2 to establish a multi-kernel model; S4, training with the multi-kernel model to obtain a plurality of classifiers; S5, assigning different weights, by the AdaBoost algorithm, to the classifiers obtained in S4 so that they are fused into a target classifier; S6, classifying the image to be classified with the target classifier to obtain the classification result. The image classification method of the invention can improve the accuracy of image classification.

Description

Image classification method and device based on multi-kernel learning classifier fusion
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an image classification method and device based on multi-kernel learning classifier fusion.
Background
Image processing plays an increasingly important part in daily life, and image classification has very important applications within it, including terrain detection, face recognition, tumor diagnosis, image retrieval on the Internet, and the like.
The support vector machine (SVM) is a general learning algorithm based on the structural risk minimization (SRM) principle. Its basic idea is to construct an optimal hyperplane in the sample input space or feature space so that the distance between the hyperplane and the two classes of samples is maximized, thereby obtaining the best generalization capability. Its solution is globally optimal and requires no manual design of a network structure. For nonlinear problems, the SVM transforms the problem into a linear one in another space through a nonlinear transformation and solves for the optimal linear classification surface there; such a nonlinear transformation is implemented by defining an appropriate inner product function, i.e., a kernel function.
A kernel function maps features from a low-dimensional space to a high-dimensional space, but the SVMs in frequent use today are single-kernel SVMs: which kernel function to use and how to set its parameters must be chosen by experience or experiment, which is inconvenient. Moreover, in practical applications the features are often not unique but heterogeneous. For image classification, color-related, texture-related, and spatial features may all be used, and the optimal kernel functions for these features are not necessarily the same; forcing them to share one kernel function may not yield the optimal mapping, that is, a more accurate classification result may not be obtained.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image classification method and device that can improve the accuracy of image classification.
In a first aspect, the present invention provides an image classification method, including the following steps:
S1, establishing a sample library, wherein the sample library comprises samples of different types;
S2, performing feature extraction on the samples of different types respectively, and obtaining kernel functions corresponding to the different sample types according to the feature extraction results;
S3, synthesizing the kernel functions obtained in S2 to establish a multi-kernel model;
S4, training with the multi-kernel model to obtain a plurality of classifiers;
S5, assigning different weights, by the AdaBoost algorithm, to the plurality of classifiers obtained by training in S4, so that the classifiers are fused into a target classifier;
S6, classifying the image to be classified with the target classifier to obtain the classification result.
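As a minimal, hedged sketch of the S1-S6 pipeline, the following NumPy code runs the flow on synthetic data. The feature extractors, kernel parameters, and classifier are stand-ins (a kernel ridge classifier replaces the SVM/AdaBoost stages to keep the sketch dependency-free), not the patent's actual implementation.

```python
# Sketch of the S1-S6 pipeline on synthetic data; kernels, parameters
# and the kernel-ridge classifier are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# S1: sample library (two classes of synthetic 2-D "features")
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# S2: per-feature kernels (stand-ins for wavelet/Gabor/GLCM features)
def rbf(A, B, sigma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def poly(A, B, q=2):
    return (A @ B.T + 1.0) ** q

# S3: synthesize the multi-kernel model (beta fixed here; the patent
# determines the coefficients via the Nystrom approximation)
beta = (0.5, 0.5)
K = beta[0] * rbf(X, X) + beta[1] * poly(X, X)

# S4: train a kernel classifier on the combined Gram matrix
# (kernel ridge regression as a simple SVM stand-in)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y.astype(float))

# S6: classify; on this easy problem training accuracy should be high
pred = np.sign(K @ alpha)
acc = (pred == y).mean()
print(acc)
```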
Preferably, in S3, synthesizing the kernel functions obtained in S2 and establishing the multi-kernel model includes:
k(x, z) = Σ_{j=1}^{M} β_j k̂_j(x, z),  β_j ≥ 0,  Σ_{j=1}^{M} β_j = 1
where k(x, z) is the multi-kernel model, k̂_j(x, z) is the j-th kernel function obtained in S2, x and z are training data, M is the total number of kernel functions, and β_j (j = 1, 2, …, M) is the synthesis coefficient of each kernel function.
Preferably, the synthesis coefficients β_j of the respective kernel functions are obtained by the Nyström approximation algorithm.
Preferably, in S5, assigning different weights by the AdaBoost algorithm to the plurality of classifiers obtained by training in S4, so that the classifiers are fused into the target classifier, includes:
S51, selecting n learning samples (x_1, y_1), …, (x_n, y_n);
S52, extracting a plurality of features of the sample library by combining existing feature extraction methods;
S53, training on the samples with the multi-kernel model to obtain a plurality of MKL weak classifiers corresponding to the features;
S54, initializing the weight of every training sample to 1/N, where N is the number of samples;
S55, performing the following loop for M iterations to update the weights:
a. training the weak classifier h_m(·) so as to minimize the weighted error function:
ε_m = Σ_{n=1}^{N} ω_n^(m) |h_m(x_n) − y_n|
where ω_n^(m) is the weight of the n-th sample in the m-th iteration;
b. computing the voting weight α_m of the weak classifier:
α_m = ln{(1 − ε_m) / ε_m}
c. updating the weights:
ω_{m+1,i} = (ω_{m,i} / Z_m) exp(−α_m y_i h_m(x_i)),  i = 1, 2, …, N
where Z_m is a normalization factor making the sum of all ω equal to 1, and ω_{m+1,i} is the weight of the i-th sample in iteration m + 1;
S56, obtaining the final binary classifier model:
Y_M(x) = sign(Σ_{m=1}^{M} α_m h_m(x))
This binary classifier model is the target classifier.
Correspondingly, in S6, classifying the image to be classified with the target classifier to obtain the classification result includes:
extending the obtained binary classifier model to the multi-class problem by the one-vs-rest method: the samples of one category are taken as one class and all remaining samples as the other, yielding as many binary classifiers as there are sample categories; a sample is then assigned to the class whose classifier gives the larger output, producing the classification result.
In a second aspect, the present invention further provides an image classification apparatus, including:
an establishing unit, configured to establish a sample library containing different types of samples;
a feature extraction unit, configured to perform feature extraction on the different types of samples and obtain, from the feature extraction results, the kernel functions corresponding to each sample type;
a synthesis unit, configured to synthesize the kernel functions obtained by the feature extraction unit and establish a multi-kernel model;
a training unit, configured to train with the multi-kernel model to obtain a plurality of classifiers;
a fusion unit, configured to assign different weights, by the AdaBoost algorithm, to the plurality of classifiers obtained by the training unit, so that the classifiers are fused into a target classifier;
and a classification unit, configured to classify the image to be classified with the target classifier to obtain the classification result.
Preferably, the synthesis unit is specifically configured to synthesize the kernel functions obtained by the feature extraction unit and establish the following multi-kernel model:
k(x, z) = Σ_{j=1}^{M} β_j k̂_j(x, z),  β_j ≥ 0,  Σ_{j=1}^{M} β_j = 1
where k(x, z) is the multi-kernel model, k̂_j(x, z) is the j-th kernel function obtained by the feature extraction unit, x and z are training data, M is the total number of kernel functions, and β_j (j = 1, 2, …, M) is the synthesis coefficient of each kernel function.
Preferably, the synthesis coefficients β_j of the respective kernel functions are obtained by the Nyström approximation algorithm.
Preferably, the fusion unit is specifically configured to:
select n learning samples (x_1, y_1), …, (x_n, y_n);
extract a plurality of features of the sample library by combining existing feature extraction methods;
train on the samples with the multi-kernel model to obtain a plurality of MKL weak classifiers corresponding to the features;
initialize the weight of every training sample to 1/N, where N is the number of samples;
perform the following loop for M iterations to update the weights:
a. train the weak classifier h_m(·) so as to minimize the weighted error function:
ε_m = Σ_{n=1}^{N} ω_n^(m) |h_m(x_n) − y_n|
where ω_n^(m) is the weight of the n-th sample in the m-th iteration;
b. compute the voting weight α_m of the weak classifier:
α_m = ln{(1 − ε_m) / ε_m}
c. update the weights:
ω_{m+1,i} = (ω_{m,i} / Z_m) exp(−α_m y_i h_m(x_i)),  i = 1, 2, …, N
where Z_m is a normalization factor making the sum of all ω equal to 1, and ω_{m+1,i} is the weight of the i-th sample in iteration m + 1;
and obtain the final binary classifier model:
Y_M(x) = sign(Σ_{m=1}^{M} α_m h_m(x))
This binary classifier model is the target classifier.
Correspondingly, the classification unit is specifically configured to extend the obtained binary classifier model to the multi-class problem by the one-vs-rest method: the samples of one category are taken as one class and all remaining samples as the other, yielding as many binary classifiers as there are sample categories; a sample is then assigned to the class whose classifier gives the larger output, producing the classification result.
According to the technical scheme, the image classification method based on MKL-MKB (multi-kernel learning classifier fusion) makes comprehensive use of the heterogeneous features of an image and characterizes the image to the greatest extent. The classifier is constructed with the synthesized multi-kernel model in kernel space, which gives the algorithm general applicability; because the kernel functions are fused, all heterogeneous features are accounted for, and the classification effect is greatly improved. In addition, the weight of each classifier obtained by training the multi-kernel model is adjusted by the AdaBoost algorithm, reducing the voting weight of weak classifiers and increasing that of strong classifiers, so that the accuracy of image classification is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of an image classification method based on multi-kernel learning classifier fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an MKL-MKB image classification system according to the first embodiment of the present invention;
FIG. 3 is a flowchart of the image classification algorithm according to an embodiment of the present invention;
FIG. 4 is a flowchart of the algorithm of step 105;
fig. 5 is a schematic structural diagram of an image classification apparatus based on multi-kernel learning classifier fusion according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the technical problems mentioned in the background, the invention provides an image classification method based on multi-kernel learning classifier fusion (MKL-MKB).
Fig. 1 is a flowchart illustrating an image classification method based on multi-kernel learning classifier fusion according to an embodiment of the present invention, and referring to fig. 1, the method includes the following steps:
step 101: establishing a sample library, wherein the sample library comprises different types of samples.
In this step, a sample library is created from the samples, and each sample is labeled according to its category.
Step 102: and respectively extracting the characteristics of the samples of different types, and acquiring kernel functions respectively corresponding to the samples of different types according to the characteristic extraction result.
In this step, feature extraction is performed on each type of sample in the sample library, for example wavelet color features and Gabor and GLCM texture features; these representative features are used to express an image.
For example, for remote sensing terrain images, feature extraction is performed on the different sample types in the established sample library, yielding wavelet color features, Gabor features, and GLCM features. According to these results, a polynomial kernel function is assigned to the wavelet color features, a radial basis function to the Gabor features, and a sigmoid function to the GLCM features.
Wherein the polynomial kernel function is:
K(x, z) = (xᵀz + 1)^q;
the radial basis function is:
K(x, z) = exp(−‖x − z‖² / σ²);
the sigmoid function is:
K(x, z) = tanh(v(xᵀz) + c);
where x and z are training data, σ and q are constants, v is a scale parameter, and c is a displacement parameter.
The linear combination synthesis kernel described below uses normalized kernels. Assume k(x, z) is a known kernel function and k̂(x, z) is its normalized form, where
k̂(x, z) = k(x, z) / √(k(x, x) k(z, z)).
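The three kernels and the normalization above can be written for full Gram matrices as follows; this is an illustrative sketch, and the parameter values are assumptions, not the patent's settings.

```python
# The polynomial, RBF and sigmoid kernels from the text, plus kernel
# normalization k_hat(x,z) = k(x,z) / sqrt(k(x,x) k(z,z)).
import numpy as np

def poly_kernel(X, Z, q=2):
    return (X @ Z.T + 1.0) ** q

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def sigmoid_kernel(X, Z, v=0.01, c=0.0):
    return np.tanh(v * (X @ Z.T) + c)

def normalize(K):
    # for a square Gram matrix, the diagonal holds k(x, x)
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

X = np.random.default_rng(1).normal(size=(5, 3))
Kn = normalize(poly_kernel(X, X))
print(np.allclose(np.diag(Kn), 1.0))  # a normalized kernel has unit diagonal
```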
Step 103: synthesizing the kernel functions obtained in step 102 to establish the multi-kernel model.
In this step, kernel synthesis is performed in kernel space using the kernel functions corresponding to the different features, and the synthesized kernel is expressed as their linear combination with coefficients β_j. Specifically, synthesizing the kernel functions obtained in step 102 and establishing the multi-kernel model includes:
k(x, z) = Σ_{j=1}^{M} β_j k̂_j(x, z),  β_j ≥ 0,  Σ_{j=1}^{M} β_j = 1
where k(x, z) is the multi-kernel model, k̂_j(x, z) is the j-th kernel function obtained in step 102, x and z are training data, M is the total number of kernel functions, and β_j (j = 1, 2, …, M) is the synthesis coefficient of each kernel function.
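The linear synthesis above can be sketched in NumPy. The β values here are fixed by hand purely for illustration (the patent obtains them via the Nyström approximation); the check at the end uses the fact that a convex combination of positive semidefinite kernels is itself positive semidefinite.

```python
# Linear synthesis of normalized base kernels with simplex weights beta,
# matching k(x,z) = sum_j beta_j * k_hat_j(x,z); beta is illustrative.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 4))

def normalized(K):
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

K_poly = normalized((X @ X.T + 1.0) ** 2)                 # polynomial kernel
K_rbf = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))    # RBF, unit diagonal

beta = np.array([0.7, 0.3])  # beta_j >= 0 and sum to 1
K = beta[0] * K_poly + beta[1] * K_rbf

# convex combinations of PSD kernels stay PSD: smallest eigenvalue >= ~0
min_eig = np.linalg.eigvalsh(K).min()
print(min_eig >= -1e-8)
```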
To address the efficiency problem of existing multi-kernel learning classifiers, the Nyström approximation algorithm is introduced into the fusion of the kernel functions. The core idea is to separate the determination of the kernel combination coefficients from the algorithmic framework of the classifier: the coefficients are determined by the Nyström approximation, and only the final combined kernel matrix participates in the classifier framework. This effectively reduces wasted space, and the properties of the Nyström approximation greatly reduce the final computational complexity.
The Nyström approximation is currently one of the most popular low-rank matrix approximation algorithms. It randomly selects m (m < n) columns from the n × n matrix K and uses these columns and the corresponding rows to construct two matrices, W ∈ R^{m×m} and C ∈ R^{n×m}. By rearranging the rows and columns of K, one can write:
C = [W; S],  K = [W, Sᵀ; S, B]
where S contains the remaining rows of the selected columns and B is the remaining block of K. Next, a singular value decomposition (SVD) is performed on W, giving W = UΛUᵀ, where U is an orthogonal matrix consisting of the eigenvectors of W and Λ = diag(s_1, …, s_m) is a diagonal matrix of the singular values of W. For a chosen rank k (k ≤ rank(W)), the Nyström approximation of K is defined as:
K̃ = C W_k⁺ Cᵀ
where W_k⁺ = Σ_{i=1}^{k} s_i⁻¹ U^(i) U^(i)ᵀ is the rank-k pseudo-inverse of W, s_i is the i-th singular value, and U^(i) is the i-th column of U.
The process of building the multi-kernel model can be understood as follows: given some basic kernel functions, such as the linear, polynomial, RBF, and sigmoid kernels, several parameter sets can be specified for each, giving M basic kernel functions in total, and their linear combination with weights β_j serves as the final kernel function. Because the kernel functions are fused, heterogeneous features are all accounted for; because the weights are learned automatically, there is no need to decide which kernel and which parameters to use. As described above, this embodiment expresses the features using wavelets, Gabor, GLCM, and the like.
The multi-kernel model ultimately requires computing a linear combination of several kernel matrices, and the traditional multi-kernel learning classifier folds this combination into the optimization problem of the corresponding classification algorithm. However, converting the kernel combination coefficients into that optimization problem requires a very complex derivation; moreover, because the determination of the coefficients is integrated into the final optimization, several kernel matrices must be kept in memory and participate in the computation throughout, which wastes space to a certain extent.
In conclusion, the Nyström approximation algorithm separates the computation of the kernel combination coefficients from the classifier algorithm, improving the memory utilization of the computer and reducing the computational complexity.
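The Nyström construction described above can be sketched as follows; the matrix, sample sizes, and rank are illustrative assumptions, and a guard is added against numerically tiny singular values.

```python
# Nystrom low-rank approximation: sample m columns of an n x n Gram
# matrix K, build C and W, and form K_approx = C @ pinv_k(W) @ C.T.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
# full RBF Gram matrix (n = 200); a wide bandwidth gives fast
# eigenvalue decay, where Nystrom approximation works well
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 10.0)

n, m, k = K.shape[0], 40, 20
idx = rng.choice(n, size=m, replace=False)  # m randomly selected columns
C = K[:, idx]                               # C in R^(n x m)
W = K[np.ix_(idx, idx)]                     # W in R^(m x m)

U, s, _ = np.linalg.svd(W)                  # W = U diag(s) U^T (W symmetric PSD)
r = min(k, int((s > 1e-8 * s[0]).sum()))    # guard against tiny singular values
W_k_pinv = (U[:, :r] / s[:r]) @ U[:, :r].T  # rank-k pseudo-inverse of W
K_approx = C @ W_k_pinv @ C.T               # Nystrom approximation of K

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(rel_err)
```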
Step 104: training with the multi-kernel model to obtain a plurality of classifiers.
Step 105: assigning different weights, by the AdaBoost algorithm, to the plurality of classifiers obtained by training in step 104, so that the classifiers are fused into the target classifier.
In this step, the AdaBoost algorithm fuses the multi-kernel model at the decision layer, effectively improving the precision of the classifier.
Specifically, assigning different weights by the AdaBoost algorithm to the multiple classifiers obtained by training in step 104, so that the classifiers are fused into the target classifier, is shown in FIG. 4 and includes:
S51, selecting n learning samples (x_1, y_1), …, (x_n, y_n);
S52, extracting a plurality of features of the sample library by combining existing feature extraction methods;
S53, training on the samples with the multi-kernel model to obtain a plurality of MKL weak classifiers corresponding to the features;
S54, initializing the weight of every training sample to 1/N, where N is the number of samples;
S55, performing the following loop for M iterations to update the weights:
a. training the weak classifier h_m(·) so as to minimize the weighted error function:
ε_m = Σ_{n=1}^{N} ω_n^(m) |h_m(x_n) − y_n|
where ω_n^(m) is the weight of the n-th sample in the m-th iteration;
b. computing the voting weight α_m of the weak classifier:
α_m = ln{(1 − ε_m) / ε_m}
c. updating the weights:
ω_{m+1,i} = (ω_{m,i} / Z_m) exp(−α_m y_i h_m(x_i)),  i = 1, 2, …, N
where Z_m is a normalization factor making the sum of all ω equal to 1, and ω_{m+1,i} is the weight of the i-th sample in iteration m + 1;
S56, obtaining the final binary classifier model:
Y_M(x) = sign(Σ_{m=1}^{M} α_m h_m(x))
This binary classifier model is the target classifier.
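The boosting loop of steps S54-S56 can be sketched on synthetic data as follows. One-feature threshold stumps stand in for the patent's MKL weak classifiers, and the voting weight follows the patent's α_m = ln((1 − ε_m)/ε_m); everything else (data, M, the stump search) is an illustrative assumption.

```python
# AdaBoost fusion per S54-S56, with decision stumps as stand-in
# weak classifiers instead of MKL-trained ones.
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 1, (60, 2)), rng.normal(1, 1, (60, 2))])
y = np.array([-1] * 60 + [1] * 60)
N, M = len(y), 10

def stump_predict(X, j, t, sgn):
    h = sgn * np.sign(X[:, j] - t)
    h[h == 0] = sgn
    return h

w = np.full(N, 1.0 / N)                    # S54: uniform initial weights
model = []                                 # (alpha_m, stump) pairs
for m in range(M):                         # S55: M boosting iterations
    best = None
    for j in range(X.shape[1]):            # train h_m: pick the stump
        for t in X[:, j]:                  # minimizing the weighted error
            for sgn in (1, -1):
                h = stump_predict(X, j, t, sgn)
                eps = w[h != y].sum()      # weighted error eps_m
                if best is None or eps < best[0]:
                    best = (eps, (j, t, sgn), h)
    eps, stump, h = best
    eps = float(np.clip(eps, 1e-10, 1 - 1e-10))
    alpha = np.log((1 - eps) / eps)        # voting weight (patent's form)
    w = w * np.exp(-alpha * y * h)         # weight update
    w /= w.sum()                           # Z_m renormalizes so sum(w) = 1
    model.append((alpha, stump))

# S56: Y_M(x) = sign(sum_m alpha_m h_m(x))
F = sum(a * stump_predict(X, *st) for a, st in model)
acc = (np.sign(F) == y).mean()
print(acc)
```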
correspondingly, in step 106, classifying the image to be classified by using the target classifier, and obtaining a classification result includes:
the obtained binary classifier model is used for expanding to solve the multi-class problem by using a one-to-many method one-summary-rest; and recording samples of a certain category as one type, uniformly considering the rest as another type, obtaining binary classifiers with the number corresponding to the number of the sample types, and classifying the samples into a type with a larger test output value to obtain a classification result. For example, for fig. 2, 6 samples are included, and the 6 classes result in 6 binary classifiers, and finally the samples are classified into the class with the larger test output value.
Step 106: and classifying the images to be classified by using the target classifier to obtain a classification result.
The image classification method based on MKL-MKB (multi-kernel learning classifier fusion) provided by this embodiment makes comprehensive use of the heterogeneous features of an image and characterizes the image to the greatest extent. The classifier is constructed with the synthesized multi-kernel model in kernel space, which gives the algorithm general applicability; because the kernel functions are fused, all heterogeneous features are accounted for, and the classification effect is greatly improved. In addition, the weight of each classifier obtained by training the multi-kernel model is adjusted by the AdaBoost algorithm, reducing the voting weight of weak classifiers and increasing that of strong classifiers, so that the accuracy of image classification is further improved.
The image classification method based on MKL-MKB (multi-kernel learning classifier fusion) provided in the first embodiment is described in detail below with reference to the accompanying drawings, taking remote sensing image classification as an example.
For the characteristics of remote sensing terrain images, and combining the extraction of features such as wavelets, Gabor, and GLCM, an image classification system based on MKL-MKB is provided; its schematic diagram is shown in FIG. 2. First, a series of sample libraries is established from the samples to be classified, each class is divided into a test set and a training set, and their features are extracted. A kernel function is selected for each feature to perform SVM classification and obtain a classification result. Each SVM classifier is then given multi-kernel processing, yielding a group of new multi-kernel SVM classifiers. Finally, decision-layer fusion is performed on the new classifiers, and by redistributing the weight of each classifier a strong classifier is obtained, completing the classification of the images. The algorithm flowchart is shown in FIG. 3.
The effect of the image classification method provided by the invention is analyzed with the experimental results. The sample set used contains 4435 training samples and 2000 test samples, divided into six classes with 36-dimensional features. The experimental results are shown in Table 1 below:
TABLE 1 Algorithm comparison experiment results
The experimental results show that, compared with other methods, the MKL-MKB algorithm proposed by the invention greatly improves image classification precision, and the classification effect is significantly enhanced.
In order to demonstrate the general applicability of the proposed image classification method, five UCI data sets are used to test the classification performance of the algorithm. The data set information is shown in Table 2 below:
table 2 UCI data set information
The test and training data were split in a 1:4 ratio, and 10 experiments were performed on each data set to avoid contingency; the result statistics are shown in Table 3 below.
Table 3 UCI data set experimental results
It can be seen that the algorithm also achieves good results on these samples, which confirms the general applicability of the algorithm proposed by the invention.
In summary, the image classification method provided by the invention has the following advantages:
(1) the invention provides an image classification algorithm based on MKL-MKB (multi-kernel learning classifier fusion), which makes comprehensive use of the heterogeneous features of an image and characterizes the image to the greatest extent;
(2) in the algorithm, the classifier is constructed with the synthesized kernel in kernel space, giving the algorithm general applicability;
(3) in the algorithm, the Nyström approximation separates the computation of the kernel combination coefficients from the classifier algorithm, improving the memory utilization of the computer and reducing the computational complexity;
(4) in the algorithm, the AdaBoost algorithm fuses the multi-kernel model at the decision layer, effectively improving the precision of the classifier.
An embodiment of the present invention provides an image classification device based on multi-kernel learning classifier fusion. Referring to FIG. 5, the device includes: an establishing unit 51, a feature extraction unit 52, a synthesis unit 53, a training unit 54, a fusion unit 55, and a classification unit 56;
the establishing unit 51 is configured to establish a sample library containing different types of samples;
the feature extraction unit 52 is configured to perform feature extraction on the different types of samples and obtain, from the feature extraction results, the kernel functions corresponding to each sample type;
the synthesis unit 53 is configured to synthesize the kernel functions obtained by the feature extraction unit and establish a multi-kernel model;
the training unit 54 is configured to train with the multi-kernel model to obtain a plurality of classifiers;
the fusion unit 55 is configured to assign different weights, by the AdaBoost algorithm, to the plurality of classifiers obtained by the training unit, so that the classifiers are fused into a target classifier;
and the classification unit 56 is configured to classify the image to be classified with the target classifier to obtain the classification result.
Preferably, the synthesis unit is specifically configured to synthesize the kernel functions obtained by the feature extraction unit and establish the following multi-kernel model:
k(x, z) = Σ_{j=1}^{M} β_j k̂_j(x, z),  β_j ≥ 0,  Σ_{j=1}^{M} β_j = 1
where k(x, z) is the multi-kernel model, k̂_j(x, z) is the j-th kernel function obtained by the feature extraction unit, x and z are training data, M is the total number of kernel functions, and β_j (j = 1, 2, …, M) is the synthesis coefficient of each kernel function.
Preferably, the synthesis coefficients β_j of the respective kernel functions are obtained by the Nyström approximation algorithm.
Preferably, the fusion unit 55 is specifically configured to:
selecting n learning samples (x)1,y1),…,(xn,yn);
Extracting a plurality of characteristics of a sample library by integrating the existing sample extraction method;
training a sample by using a multi-core model to obtain a plurality of MKL weak classifiers corresponding to the characteristics;
initializing the weight of all training samples to be 1/N, wherein N is the number of the samples;
performing the following loop iteration to update the weights; wherein, loop iteration is performed for M times:
a. training weak classifier hm() To minimize it the weight error function:
is the error weight of the nth classifier in the mth cycle;
b. computing the voting weight α_m of the weak classifier:
α_m = ln((1 − ε_m)/ε_m)
c. updating the weights:
ω_{m+1,i} = (ω_{m,i}/Z_m)·exp(−α_m·y_i·h_m(x_i)),  i = 1, 2, …, N
wherein Z_m is a normalization factor that makes all ω sum to 1, i.e. Z_m = Σ_{i=1..N} ω_{m,i}·exp(−α_m·y_i·h_m(x_i)), and ω_{m+1,i} is the error weight of the i-th sample in cycle m+1;
obtaining a final binary classifier model:
Y_M(x) = sign(Σ_{m=1..M} α_m·h_m(x))
the binary classifier model is the target classifier;
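As an illustrative sketch only (not the patent's implementation), steps a–c and the final sign-combination can be coded directly; simple 1-D decision stumps stand in here for the MKL weak classifiers the patent trains from the multi-kernel model, and all names are hypothetical:

```python
import numpy as np

def adaboost(X, y, weak_learners, M):
    """Fuse weak classifiers by AdaBoost; labels y are in {+1, -1}.
    weak_learners: candidate functions h(X) -> predictions in {+1, -1}.
    Returns (alphas, chosen) defining Y_M(x) = sign(sum_m alpha_m * h_m(x))."""
    N = len(y)
    w = np.full(N, 1.0 / N)              # initialize all sample weights to 1/N
    alphas, chosen = [], []
    for _ in range(M):
        # a. pick the weak classifier minimizing the weighted error eps_m
        errs = [np.sum(w * (h(X) != y)) for h in weak_learners]
        m = int(np.argmin(errs))
        eps = errs[m]
        if eps == 0:                     # a perfect weak classifier: use it alone
            alphas.append(1.0)
            chosen.append(weak_learners[m])
            break
        if eps >= 0.5:                   # no weak classifier better than chance
            break
        # b. voting weight alpha_m = ln((1 - eps_m) / eps_m)
        alpha = np.log((1.0 - eps) / eps)
        # c. reweight: w_{m+1,i} = w_{m,i} * exp(-alpha_m * y_i * h_m(x_i)) / Z_m
        h = weak_learners[m]
        w = w * np.exp(-alpha * y * h(X))
        w /= w.sum()                     # Z_m renormalizes the weights to sum to 1
        alphas.append(alpha)
        chosen.append(h)
    return alphas, chosen

def predict(alphas, chosen, X):
    # final binary classifier Y_M(x) = sign(sum_m alpha_m * h_m(x))
    return np.sign(sum(a * h(X) for a, h in zip(alphas, chosen)))

# toy 1-D data; threshold stumps as stand-in weak classifiers
stumps = [lambda X, t=t: np.where(X > t, 1.0, -1.0) for t in (-1.5, 0.0, 1.5)]
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alphas, chosen = adaboost(X, y, stumps, M=3)

# a noisier set where no single stump is perfect, so several rounds are fused
X2 = np.array([-2.0, -1.0, 0.5, 1.0, 2.0])
y2 = np.array([-1.0, -1.0, -1.0, 1.0, 1.0])
alphas2, chosen2 = adaboost(X2, y2, stumps, M=3)
```

The ln((1 − ε)/ε) weight makes accurate weak classifiers speak louder in the final sign vote, while the exponential reweighting concentrates subsequent rounds on the samples still misclassified.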
correspondingly, the classifying unit 56 is specifically configured to use the obtained binary classifier model in a one-versus-rest expansion to solve the multi-class problem: the samples of one category are labeled as one class and all remaining samples are uniformly treated as the other class, yielding as many binary classifiers as there are sample categories; a test sample is assigned to the class whose classifier gives the larger output value, and the classification result is thereby obtained.
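The one-versus-rest expansion just described can likewise be sketched; the centroid-based `train_binary` below is a toy stand-in for the boosted MKL classifier, and all names are illustrative:

```python
import numpy as np

def one_vs_rest_train(X, labels, classes, train_binary):
    # One binary classifier per class: that class's samples are +1, "the rest" are -1.
    models = {}
    for c in classes:
        y = np.where(labels == c, 1.0, -1.0)
        models[c] = train_binary(X, y)
    return models

def one_vs_rest_predict(models, X):
    # Assign each sample to the class whose classifier gives the largest output value.
    classes = list(models)
    scores = np.stack([models[c](X) for c in classes])  # shape: (n_classes, n_samples)
    return np.array([classes[i] for i in np.argmax(scores, axis=0)])

# toy stand-in binary learner: score by negative distance to the positive-class centroid
def train_binary(X, y):
    mu = X[y > 0].mean()
    return lambda X, mu=mu: -np.abs(X - mu)

X = np.array([0.0, 0.1, 5.0, 5.1, 10.0, 10.1])
labels = np.array([0, 0, 1, 1, 2, 2])
models = one_vs_rest_train(X, labels, [0, 1, 2], train_binary)
pred = one_vs_rest_predict(models, X)
```

The number of binary classifiers equals the number of sample categories, exactly as the text states; comparing real-valued outputs rather than hard ±1 decisions is what lets "the larger test output value" break ties between classifiers.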
The image classification apparatus provided in this embodiment may be used to execute the image classification method described in the above embodiments, and the principle and the technical effect are similar, which are not described herein again.
In the description of the present invention, it is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above examples are only for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. An image classification method based on multi-kernel learning and classifier fusion, characterized by comprising the following steps:
s1, establishing a sample library, wherein the sample library comprises samples of different types;
s2, respectively extracting the features of the samples of different types, and acquiring kernel functions respectively corresponding to the samples of different types according to the feature extraction result;
s3, synthesizing the kernel functions obtained in the S2, and establishing a multi-kernel model;
s4, training by using the multi-kernel model to obtain a plurality of classifiers;
s5, assigning different weights, by using an Adaboost algorithm, to the plurality of classifiers obtained by training in S4, so that the plurality of classifiers are fused to obtain a target classifier;
and S6, classifying the images to be classified by using the target classifier to obtain a classification result.
2. The method according to claim 1, wherein in S3 the kernel functions obtained in S2 are synthesized to establish the following multi-kernel model:
k(x, z) = Σ_{j=1..M} β_j·k̂_j(x, z),  β_j ≥ 0,  Σ_{j=1..M} β_j = 1
wherein k(x, z) is the multi-kernel model, k̂_j(x, z) is the j-th kernel function obtained in S2, x and z are training data, M is the total number of kernel functions, and β_j (j = 1, 2, …, M) is the synthesis coefficient of each kernel function.
3. The method according to claim 2, wherein the synthesis coefficients β_j of the respective kernel functions are obtained by using the Nyström approximation algorithm.
4. The method according to claim 1, wherein in S5 the assigning of different weights to the plurality of classifiers obtained by training in S4 by using an Adaboost algorithm, so that the plurality of classifiers are fused to obtain a target classifier, comprises:
s51, selecting N learning samples (x_1, y_1), …, (x_N, y_N);
s52, extracting a plurality of features of the sample library by combining existing feature extraction methods;
s53, training the samples with the multi-kernel model to obtain a plurality of MKL weak classifiers corresponding to the features;
s54, initializing the weights of all training samples to 1/N, wherein N is the number of samples;
s55, performing the following loop iteration M times to update the weights:
a. training a weak classifier h_m(·) to minimize the weighted error function:
ε_m = Σ_{n=1..N} ω_n^(m)·I(h_m(x_n) ≠ y_n)
wherein ω_n^(m) is the error weight of the n-th sample in the m-th cycle;
b. computing the voting weight α_m of the weak classifier:
α_m = ln((1 − ε_m)/ε_m)
c. updating the weights:
ω_{m+1,i} = (ω_{m,i}/Z_m)·exp(−α_m·y_i·h_m(x_i)),  i = 1, 2, …, N
wherein Z_m is a normalization factor that makes all ω sum to 1, i.e. Z_m = Σ_{i=1..N} ω_{m,i}·exp(−α_m·y_i·h_m(x_i)), and ω_{m+1,i} is the error weight of the i-th sample in cycle m+1;
s56, obtaining a final binary classifier model:
Y_M(x) = sign(Σ_{m=1..M} α_m·h_m(x))
the binary classifier model is the target classifier;
correspondingly, in S6, classifying the image to be classified by using the target classifier to obtain the classification result comprises:
using the obtained binary classifier model in a one-versus-rest expansion to solve the multi-class problem: the samples of one category are labeled as one class and all remaining samples are uniformly treated as the other class, yielding as many binary classifiers as there are sample categories; a test sample is assigned to the class whose classifier gives the larger output value, and the classification result is thereby obtained.
5. An image classification device based on multi-kernel learning classifier fusion is characterized by comprising:
an establishing unit, configured to establish a sample library comprising samples of different types;
a feature extraction unit, configured to respectively extract features of the samples of different types and to acquire kernel functions respectively corresponding to the samples of different types according to the feature extraction result;
a synthesis unit, configured to synthesize the kernel functions acquired by the feature extraction unit and establish a multi-kernel model;
a training unit, configured to train with the multi-kernel model to obtain a plurality of classifiers;
a fusion unit, configured to assign different weights, using an Adaboost algorithm, to the plurality of classifiers obtained by the training unit, so that the plurality of classifiers are fused to obtain a target classifier;
and a classification unit, configured to classify an image to be classified by using the target classifier to obtain a classification result.
6. The device according to claim 5, wherein the synthesis unit is specifically configured to synthesize the kernel functions acquired by the feature extraction unit and establish the following multi-kernel model:
k(x, z) = Σ_{j=1..M} β_j·k̂_j(x, z),  β_j ≥ 0,  Σ_{j=1..M} β_j = 1
wherein k(x, z) is the multi-kernel model, k̂_j(x, z) is the j-th kernel function acquired by the feature extraction unit, x and z are training data, M is the total number of kernel functions, and β_j (j = 1, 2, …, M) is the synthesis coefficient of each kernel function.
7. The device according to claim 6, wherein the synthesis coefficients β_j of the respective kernel functions are obtained by using the Nyström approximation algorithm.
8. The device according to claim 5, wherein the fusion unit is specifically configured to:
selecting N learning samples (x_1, y_1), …, (x_N, y_N);
extracting a plurality of features of the sample library by combining existing feature extraction methods;
training the samples with the multi-kernel model to obtain a plurality of MKL weak classifiers corresponding to the features;
initializing the weights of all training samples to 1/N, wherein N is the number of samples;
performing the following loop iteration M times to update the weights:
a. training a weak classifier h_m(·) to minimize the weighted error function:
ε_m = Σ_{n=1..N} ω_n^(m)·I(h_m(x_n) ≠ y_n)
wherein ω_n^(m) is the error weight of the n-th sample in the m-th cycle;
b. computing the voting weight α_m of the weak classifier:
α_m = ln((1 − ε_m)/ε_m)
c. updating the weights:
ω_{m+1,i} = (ω_{m,i}/Z_m)·exp(−α_m·y_i·h_m(x_i)),  i = 1, 2, …, N
wherein Z_m is a normalization factor that makes all ω sum to 1, i.e. Z_m = Σ_{i=1..N} ω_{m,i}·exp(−α_m·y_i·h_m(x_i)), and ω_{m+1,i} is the error weight of the i-th sample in cycle m+1;
obtaining a final binary classifier model:
Y_M(x) = sign(Σ_{m=1..M} α_m·h_m(x));
the binary classifier model is the target classifier;
correspondingly, the classification unit is specifically configured to use the obtained binary classifier model in a one-versus-rest expansion to solve the multi-class problem: the samples of one category are labeled as one class and all remaining samples are uniformly treated as the other class, yielding as many binary classifiers as there are sample categories; a test sample is assigned to the class whose classifier gives the larger output value, and the classification result is thereby obtained.
CN201610510844.XA 2016-06-30 2016-06-30 A kind of image classification method based on Multiple Kernel Learning Multiple Classifier Fusion and device Pending CN106203487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610510844.XA CN106203487A (en) 2016-06-30 2016-06-30 A kind of image classification method based on Multiple Kernel Learning Multiple Classifier Fusion and device


Publications (1)

Publication Number Publication Date
CN106203487A true CN106203487A (en) 2016-12-07

Family

ID=57462959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610510844.XA Pending CN106203487A (en) 2016-06-30 2016-06-30 A kind of image classification method based on Multiple Kernel Learning Multiple Classifier Fusion and device

Country Status (1)

Country Link
CN (1) CN106203487A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314614A (en) * 2011-10-24 2012-01-11 北京大学 Image semantics classification method based on class-shared multiple kernel learning (MKL)
CN102855496A (en) * 2012-08-24 2013-01-02 苏州大学 Method and system for authenticating shielded face
CN103186776A (en) * 2013-04-03 2013-07-03 西安电子科技大学 Human detection method based on multiple features and depth information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
F. ZOU et al.: "Multi-view multi-label learning for image annotation", 《MULTIMEDIA TOOLS AND APPLICATIONS》 *
"Human tracking by multiple kernel boosting with locality affinity constraints", 《COMPUTER VISION-ACCV 2010》 *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665349A (en) * 2016-07-29 2018-02-06 腾讯科技(深圳)有限公司 The training method and device of multiple targets in a kind of disaggregated model
CN107665349B (en) * 2016-07-29 2020-12-04 腾讯科技(深圳)有限公司 Training method and device for multiple targets in classification model
CN108694404A (en) * 2017-04-06 2018-10-23 同方威视技术股份有限公司 The method and device of the examination of cargo is carried out based on radiation image
CN109145933B (en) * 2017-06-28 2022-02-08 腾讯科技(深圳)有限公司 Classifier training method and device for media resources
CN109145933A (en) * 2017-06-28 2019-01-04 腾讯科技(深圳)有限公司 The classifier training method and device of media resource
CN107491734A (en) * 2017-07-19 2017-12-19 苏州闻捷传感技术有限公司 Semi-supervised Classification of Polarimetric SAR Image method based on multi-core integration Yu space W ishart LapSVM
CN107491734B (en) * 2017-07-19 2021-05-07 苏州闻捷传感技术有限公司 Semi-supervised polarimetric SAR image classification method based on multi-core fusion and space Wishart LapSVM
CN108320374A (en) * 2018-02-08 2018-07-24 中南大学 A kind of multinational paper money number character identifying method based on finger image
CN108198324A (en) * 2018-02-08 2018-06-22 中南大学 A kind of multinational bank note currency type recognition methods based on finger image
CN108198324B (en) * 2018-02-08 2019-11-08 中南大学 A kind of multinational bank note currency type recognition methods based on finger image
CN108564569A (en) * 2018-03-23 2018-09-21 石家庄铁道大学 A kind of distress in concrete detection method and device based on multinuclear classification learning
CN108564569B (en) * 2018-03-23 2019-11-26 石家庄铁道大学 A kind of distress in concrete detection method and device based on multicore classification learning
CN108596154A (en) * 2018-05-14 2018-09-28 河海大学 Classifying Method in Remote Sensing Image based on high dimensional feature selection and multi-level fusion
CN108596154B (en) * 2018-05-14 2021-09-24 河海大学 Remote sensing image classification method based on high-dimensional feature selection and multilevel fusion
CN108742627A (en) * 2018-06-25 2018-11-06 重庆知遨科技有限公司 A kind of detection device based on the classification of brain Medical image fusion
US12046023B2 (en) 2019-03-22 2024-07-23 International Business Machines Corporation Unification of models having respective target classes with distillation
CN110110742A (en) * 2019-03-26 2019-08-09 北京达佳互联信息技术有限公司 Multiple features fusion method, apparatus, electronic equipment and storage medium
CN110110742B (en) * 2019-03-26 2021-02-12 北京达佳互联信息技术有限公司 Multi-feature fusion method and device, electronic equipment and storage medium
CN110059775A (en) * 2019-05-22 2019-07-26 湃方科技(北京)有限责任公司 Rotary-type mechanical equipment method for detecting abnormality and device
CN110378380A (en) * 2019-06-17 2019-10-25 江苏大学 A kind of image classification method based on the study of multicore Ensemble classifier
CN110378380B (en) * 2019-06-17 2023-09-29 江苏大学 Image classification method based on multi-core integrated classification learning
CN111178418B (en) * 2019-12-23 2023-07-25 东软集团股份有限公司 Image classification method and device, storage medium and electronic equipment
CN111178418A (en) * 2019-12-23 2020-05-19 东软集团股份有限公司 Image classification method and device, storage medium and electronic equipment
CN111582655B (en) * 2020-04-14 2021-07-30 广东卓维网络有限公司 Power utilization system based on multi-user schedulable potential analysis
CN111582655A (en) * 2020-04-14 2020-08-25 广东卓维网络有限公司 Power utilization system based on multi-user schedulable potential analysis
CN114669508A (en) * 2022-03-01 2022-06-28 常州大学 Screening method for graded utilization monomers of retired batteries
CN115661680A (en) * 2022-11-15 2023-01-31 北京轨道未来空间科技有限公司 Satellite remote sensing image processing method
CN116206166A (en) * 2023-05-05 2023-06-02 西南科技大学 Data dimension reduction method, device and medium based on kernel projection learning
CN116206166B (en) * 2023-05-05 2023-08-11 西南科技大学 Data dimension reduction method, device and medium based on kernel projection learning

Similar Documents

Publication Publication Date Title
CN106203487A (en) A kind of image classification method based on Multiple Kernel Learning Multiple Classifier Fusion and device
Kuo et al. On data-driven saak transform
CN109063724B (en) Enhanced generation type countermeasure network and target sample identification method
CN108108751B (en) Scene recognition method based on convolution multi-feature and deep random forest
CN105740912B (en) The recognition methods and system of low-rank image characteristics extraction based on nuclear norm regularization
US9330332B2 (en) Fast computation of kernel descriptors
CN109919252B (en) Method for generating classifier by using few labeled images
US20110293173A1 (en) Object Detection Using Combinations of Relational Features in Images
JP2015052832A (en) Weight setting device and method
JP2004152297A (en) Method and system for integrating multiple cue
JP7225731B2 (en) Imaging multivariable data sequences
CN108268890A (en) A kind of hyperspectral image classification method
CN110348287A (en) A kind of unsupervised feature selection approach and device based on dictionary and sample similar diagram
CN109508640A (en) Crowd emotion analysis method and device and storage medium
CN114937173A (en) Hyperspectral image rapid classification method based on dynamic graph convolution network
Pichel et al. A new approach for sparse matrix classification based on deep learning techniques
Perbet et al. Random Forest Clustering and Application to Video Segmentation.
CN113065520A (en) Multi-modal data-oriented remote sensing image classification method
Song et al. Dictionary reduction: Automatic compact dictionary learning for classification
Tariyal et al. Greedy deep dictionary learning for hyperspectral image classification
CN111371611A (en) Weighted network community discovery method and device based on deep learning
CN116152645A (en) Indoor scene visual recognition method and system integrating multiple characterization balance strategies
Sotiropoulos Handling variable shaped & high resolution images for multi-class classification problem
Ju et al. A novel neutrosophic logic svm (n-svm) and its application to image categorization
CN109002832B (en) Image identification method based on hierarchical feature extraction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20161207)