CN113449779A - SVM incremental learning method for improving the KKT condition based on sample distribution density - Google Patents

SVM incremental learning method for improving the KKT condition based on sample distribution density

Info

Publication number
CN113449779A
Authority
CN
China
Prior art keywords: sample, KKT condition, classifier, model, samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110652246.7A
Other languages
Chinese (zh)
Other versions
CN113449779B (en)
Inventor
Wang Caiyun (王彩云)
Wu Yida (吴钇达)
Li Yangyu (李阳雨)
Ding Muheng (丁牧恒)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202110652246.7A
Publication of CN113449779A
Application granted
Publication of CN113449779B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/20 - Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an SVM incremental learning method that improves the KKT condition based on sample distribution density, comprising the following steps: obtain the support vector set SV_0 of the classifier and the standard KKT condition I of the classifier Model_old; construct the improved KKT condition I of the SVM classifier Model_old; judge whether the samples in the newly added sample set B satisfy the standard KKT condition I of the SVM classifier Model_old; perform a second judgment on the samples in the newly added sample set B to decide whether they satisfy the improved KKT condition I of the SVM classifier Model_old; train the classifier Model_1 on the candidate support vector set SV_1; determine the samples satisfying the improved KKT condition II of the classifier Model_1; train the classifier on the set SV_0 ∪ SV_1 ∪ SV_add and output the updated classifier Model_2. Because the KKT condition is improved based on the sample distribution density, newly added samples with an unbalanced distribution can be screened effectively, and the generalization capability of the SVM incremental learning algorithm is improved.

Description

SVM incremental learning method for improving the KKT condition based on sample distribution density
Technical Field
The invention belongs to the technical field of machine learning and relates to an SVM incremental learning method that improves the KKT condition based on sample distribution density. In particular, the method targets SVM incremental learning under an uneven sample distribution and can be used for the online updating of an SVM classifier during automatic incremental learning.
Background
Support Vector Machines (SVMs) are a machine-learning pattern recognition and classification algorithm proposed by Vapnik in the 1990s, reference [Vapnik V. Statistical Learning Theory. New York: John Wiley & Sons, Inc., 1998], and perform well on classification tasks with small samples and high-dimensional features. The traditional SVM algorithm is a batch learning method: it assumes that all training samples are available at once before training, and the learning process terminates once training is complete. In practical applications, however, the training samples are not always obtained all at once; they arrive gradually over time, and the information contained in the newly added samples changes with time. The classifier therefore needs the ability to keep learning useful knowledge from these sample data, so that it can be updated online as new samples are added.
How to learn useful knowledge from newly added sample data while ensuring that the retrained model keeps good classification performance is the key problem to be solved. Incremental learning addresses it by retaining the important information of the newly added samples. The idea of incremental learning can be summarized as follows: on the basis of the original knowledge base, update only according to the changes caused by the newly added data. This greatly reduces the training time and memory required after new sample data are added.
The SVM incremental learning algorithm proposed by Syed et al. is an early and fairly classical algorithm, reference [Syed N A, Liu H, Sun K. Incremental learning with support vector machines. Proc. Int. Joint Conference on Artificial Intelligence, 1999]. The algorithm does not screen the newly added samples, so newly added samples that contribute nothing to classification accuracy are trained as well; as a result, the efficiency of incremental learning is poor and classification accuracy suffers. Researchers subsequently introduced the Karush-Kuhn-Tucker (KKT) conditions to screen new samples, yielding SVM incremental learning algorithms based on the KKT condition. In 2014, Zhang Lin et al. introduced an error-point-driven idea for newly added samples on top of the KKT condition and proposed a new SVM incremental learning algorithm, reference [Zhang Lin, Yao Minghai, Tong Long, et al.]. Although SVM incremental learning algorithms based on the KKT condition do screen the newly added samples, their generalization capability is poor. When the newly added samples are unevenly distributed, the existing KKT-based SVM incremental learning methods cannot adaptively screen newly added samples whose distribution densities differ markedly, and the classification accuracy of the SVM classifier updated by these methods is clearly lower in that case.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide an SVM incremental learning method that improves the KKT condition based on sample distribution density, so as to solve the problems of prior-art SVM incremental learning algorithms: poor learning of newly added samples under an unbalanced sample distribution, weak classifier generalization, and low classification accuracy.
By checking the KKT condition of the newly added samples, the method preselects a set of possible support vectors SV for updating the classifier. Analyzing the number of non-support vectors near the classification boundary under different sample distribution densities reveals the following rule: the higher the sample distribution density, the more non-support vectors lie near the classification boundary and the more samples can be converted into support vectors of the new classifier; the lower the sample distribution density, the fewer non-support vectors lie near the boundary and the fewer samples can be converted into support vectors of the new classifier. Consequently, if a fixed-value bias parameter is introduced to improve the KKT condition, then under an unbalanced sample distribution the learning of newly added samples in low-density sample sets is clearly insufficient compared with that in high-density sample sets.
Building on the KKT-based SVM incremental learning method, the invention improves the KKT condition by adaptively computing the bias parameter from the sample distribution density. With the improved KKT condition, a candidate support vector set SV can be screened automatically from the positive and negative samples of the newly added set, strengthening the learning of low-density samples while balancing the learning of high-density samples. Fast incremental learning of the newly added samples is thus achieved while fully reusing the historical training results, improving the generalization capability and classification accuracy of the classifier.
The invention discloses an SVM incremental learning method that improves the KKT condition based on sample distribution density, comprising the following steps:
1) Train the SVM classifier Model_old on the original sample set A, obtaining the support vector set SV_0 of Model_old and the standard KKT condition I;
2) Calculate the sample distribution densities of the positive and negative samples in the original sample set A, calculate the bias parameters for the positive and negative samples from these densities, and add the bias parameters to the standard KKT condition I to construct the improved KKT condition I of the SVM classifier Model_old, adaptively optimized for the positive and negative samples;
3) Judge whether all samples in the newly added sample set B satisfy the standard KKT condition I of the SVM classifier Model_old; if they all do, output the original SVM classifier Model_old as the required model and finish; otherwise, put the samples of B that violate the standard KKT condition I into set B1 and the samples that satisfy it into set B2;
4) Judge whether the samples in the newly added sample set B satisfy the improved KKT condition I of the SVM classifier Model_old, and put the samples that satisfy it into the candidate support vector set SV_1, defined as possible support vector samples;
5) Train the classifier Model_1 on the candidate support vector set SV_1, obtaining the standard KKT condition II of Model_1; calculate the bias parameters for the positive and negative samples from the sample distribution densities of the support vector set SV_1, and obtain the improved KKT condition II after adding the bias parameters;
6) Judge whether set B2 satisfies the improved KKT condition II; if B2 is empty or all of its samples satisfy the improved KKT condition II, output the updated model, namely the classifier Model_1, and finish; otherwise, put the samples that do not satisfy the improved KKT condition II into the supplementary support vector set SV_add;
7) Train the classifier Model_2 on the set SV_0 ∪ SV_1 ∪ SV_add and output the updated classifier Model_2, where SV_0 is the support vector set of the original classifier and SV_add is the supplementary support vector set.
Further, in step 1) the SVM classifier Model_old is trained on the original sample set using the LIBSVM toolbox, and the standard KKT condition I is expressed as:

α_i = 0      ⟹  y_i f(x_i) ≥ 1
0 < α_i < C  ⟹  y_i f(x_i) = 1
α_i = C      ⟹  y_i f(x_i) ≤ 1

In solving for the optimal solution of the optimal hyperplane of the SVM classifier, under the constraint 0 < α_i ≤ C the standard KKT condition simplifies to y_i f(x_i) ≤ 1, where C is the penalty coefficient, α = (α_1, α_2, ..., α_n)^T is the vector of Lagrange multipliers, T denotes the transpose of a matrix, y_i ∈ {+1, −1} is the sample label, and f(x_i) is the distance of sample x_i from the optimal hyperplane.
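Although the patent itself uses the LIBSVM toolbox, the screening described above is easy to prototype in Python with scikit-learn's SVC, whose underlying solver is LIBSVM. The sketch below is illustrative only: the function names, the RBF kernel, and the label convention y_i ∈ {+1, −1} are our assumptions, not the patent's code.

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC wraps the LIBSVM solver


def train_svm(X, y, C=1.0):
    """Train an SVM classifier (a stand-in for Model_old) and return it
    together with its support vector set SV_0."""
    clf = SVC(C=C, kernel="rbf", gamma="scale").fit(X, y)
    SV0 = (X[clf.support_], y[clf.support_])  # support vector set SV_0
    return clf, SV0


def satisfies_standard_kkt(clf, X_new, y_new):
    """Standard KKT screening of new samples: y_i * f(x_i) >= 1 means the
    sample could keep alpha_i = 0 under the current model, i.e. it satisfies
    the standard KKT condition I; y_i * f(x_i) < 1 means it violates it."""
    f = clf.decision_function(X_new)  # signed distance f(x_i)
    return y_new * f >= 1.0
```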
Further, in step 2) the bias parameters for the positive and negative samples are calculated from the sample distribution density and the improved KKT condition I of the classifier Model_old is constructed, specifically as follows:

21) Let the original training set A = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} be the original sample set, with positive sample set A_+ = {x_i | y_i = +1} and negative sample set A_- = {x_i | y_i = −1}. The class centers of the positive and negative samples in the sample set are c_+ and c_-, respectively:

c_+ = (1/N_+) Σ_{y_i=+1} x_i,   c_- = (1/N_-) Σ_{y_i=−1} x_i

and d_i^+ = ‖x_i − c_+‖ and d_i^- = ‖x_i − c_-‖ denote the distances between the positive and negative samples and their class centers;

22) Calculate the sample distribution densities ξ_+ and ξ_- of the positive and negative sample sets:

ξ_+ = Σ_i d_i^+ / (N_+ · max(d_i^+)),   ξ_- = Σ_i d_i^- / (N_- · max(d_i^-))

23) Calculate the bias parameters δ_+ and δ_- for the positive and negative samples:

δ_+ = 1 − ξ_+,   δ_- = 1 − ξ_-

24) The improved KKT condition I of the SVM classifier Model_old is:

|y_i f(x_i)| ≤ 1 + δ = 1 + (1 − ξ),   ξ ∈ {ξ_+, ξ_-}

where N_+ is the number of positive samples, N_- is the number of negative samples, and max(·) is the maximum of the distances.
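Steps 21) to 24) can be sketched in the same illustrative Python style. Note that the exact normalisation inside class_density reflects our reading of the density equations reconstructed above and should be treated as an assumption rather than the patent's verbatim formula.

```python
def class_density(X_cls):
    """Sample distribution density xi of one class: the mean distance of the
    class samples to their class centre, normalised by the maximum distance,
    so that xi lies in (0, 1]. (Our reading of the patent's equations.)"""
    c = X_cls.mean(axis=0)                 # class centre c_+ or c_-
    d = np.linalg.norm(X_cls - c, axis=1)  # distances d_i to the centre
    m = d.max()
    return d.mean() / m if m > 0 else 1.0  # degenerate class: treat as dense


def class_biases(X_pos, X_neg):
    """Bias parameters delta_+ = 1 - xi_+ and delta_- = 1 - xi_-."""
    return {+1: 1.0 - class_density(X_pos), -1: 1.0 - class_density(X_neg)}


def improved_kkt_mask(clf, X_new, y_new, delta):
    """Improved KKT condition I: |y_i f(x_i)| <= 1 + delta, with the bias
    delta chosen per class. Samples passing this test enter the candidate
    support vector set SV_1."""
    f = clf.decision_function(X_new)
    thresh = np.array([1.0 + delta[label] for label in y_new])
    return np.abs(y_new * f) <= thresh
```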
Further, in step 3) each sample vector s_j in the newly added sample set B = {s_1, s_2, ..., s_j, ..., s_M} is judged: if every s_j satisfies the standard KKT condition I, the classifier Model_old is output and the process ends; otherwise, the newly added sample set B is divided into two sets B1 and B2, the sample vectors satisfying the standard KKT condition I of the SVM classifier Model_old being put into set B2 and the sample vectors violating the standard KKT condition I into set B1.
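The split of B into B1 and B2 then reduces to masking with the standard-KKT test sketched earlier (again illustrative, not the patented implementation):

```python
def split_new_samples(clf, X_b, y_b):
    """Step 3: violators of the standard KKT condition I go to set B1,
    satisfiers go to set B2."""
    ok = satisfies_standard_kkt(clf, X_b, y_b)
    B1 = (X_b[~ok], y_b[~ok])
    B2 = (X_b[ok], y_b[ok])
    return B1, B2
```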
Further, step 5) uses the LIBSVM toolbox to train the classifier Model_1 on the candidate support vector set SV_1, calculates the sample distribution densities of the positive and negative samples in the candidate support vector set SV_1, calculates the bias parameters for the positive and negative samples from these densities, and adds the bias parameters to the standard KKT condition II to construct the improved KKT condition II of the SVM classifier Model_1, adaptively optimized for the positive and negative samples.
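Step 5) reuses the same density machinery on SV_1; a minimal sketch under the same assumptions (and assuming SV_1 contains samples of both classes):

```python
def build_model1(X_sv1, y_sv1, C=1.0):
    """Step 5: train Model_1 on the candidate support vector set SV_1 and
    derive the per-class bias parameters of the improved KKT condition II
    from SV_1's own distribution densities."""
    model1 = SVC(C=C, kernel="rbf", gamma="scale").fit(X_sv1, y_sv1)
    delta2 = class_biases(X_sv1[y_sv1 == +1], X_sv1[y_sv1 == -1])
    return model1, delta2
```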
Further, in step 7) the support vector set SV_0 of the original classifier, the set of possible support vectors SV_1 selected from the newly added sample set B, and the supplementary support vector set SV_add are merged, and the LIBSVM toolbox is used to train the classifier Model_2 on the set SV_0 ∪ SV_1 ∪ SV_add; this classifier Model_2 is the classifier model sought.
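Finally, step 7)'s retraining on the union of the three support vector sets, in the same illustrative style:

```python
def final_update(SV0, SV1, SVadd, C=1.0):
    """Step 7: train Model_2 on the union SV_0 U SV_1 U SV_add.
    Each argument is an (X, y) pair."""
    X = np.vstack([SV0[0], SV1[0], SVadd[0]])
    y = np.concatenate([SV0[1], SV1[1], SVadd[1]])
    return SVC(C=C, kernel="rbf", gamma="scale").fit(X, y)
```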
The beneficial effects of the invention are as follows:
1. The invention recognizes that existing SVM incremental learning methods based on an improved KKT condition usually adopt a fixed bias parameter to improve the KKT condition for selecting newly added samples, without considering how special situations such as a markedly unbalanced sample distribution or a small sample set affect that selection; as a result, the classifier updated by incremental learning has low classification accuracy.
2. The invention provides a new improved-KKT-condition SVM incremental learning method that introduces sample distribution density parameters to adaptively calculate the bias parameters of the improved KKT condition, so that the improved KKT condition can select the candidate support vector set SV from the newly added samples more effectively when the sample distribution is unbalanced or the sample set is small, improving the classification accuracy of the classifier.
3. The invention improves the learning efficiency on incremental samples: fewer incremental learning rounds are needed to reach the same target classification accuracy, reducing the time consumed by online updating of the classifier.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention.
FIG. 2 is the ROC curve of the classification results on the abalone data set after 10 increments.
FIG. 3 is the ROC curve of the classification results on the Waveform data set after 10 increments.
FIG. 4 is a graph of classification accuracy versus the number of increments on the Waveform data set.
Detailed Description
To facilitate understanding by those skilled in the art, the invention is further described below with reference to embodiments and drawings, which are not intended to limit the invention.
Referring to fig. 1, the SVM incremental learning method for improving the KKT condition based on the sample distribution density according to the present invention includes the following steps:
1) Train the SVM classifier Model_old on the original sample set A, obtaining the support vector set SV_0 of Model_old and the standard KKT condition I.
In step 1) the SVM classifier Model_old is trained on the original sample set using the LIBSVM toolbox, and the standard KKT condition I is expressed as:

α_i = 0      ⟹  y_i f(x_i) ≥ 1
0 < α_i < C  ⟹  y_i f(x_i) = 1
α_i = C      ⟹  y_i f(x_i) ≤ 1

In solving for the optimal solution of the optimal hyperplane of the SVM classifier, under the constraint 0 < α_i ≤ C the standard KKT condition simplifies to y_i f(x_i) ≤ 1, where C is the penalty coefficient, α = (α_1, α_2, ..., α_n)^T is the vector of Lagrange multipliers, T denotes the transpose of a matrix, y_i ∈ {+1, −1} is the sample label, and f(x_i) is the distance of sample x_i from the optimal hyperplane.
2) Calculate the sample distribution densities of the positive and negative samples in the original sample set A, calculate the bias parameters for the positive and negative samples from these densities, and add the bias parameters to the standard KKT condition I to construct the improved KKT condition I of the SVM classifier Model_old, adaptively optimized for the positive and negative samples.
In step 2) the bias parameters for the positive and negative samples are calculated from the sample distribution density and the improved KKT condition I of the classifier Model_old is constructed, specifically as follows:

21) Let the original training set A = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} be the original sample set, with positive sample set A_+ = {x_i | y_i = +1} and negative sample set A_- = {x_i | y_i = −1}. The class centers of the positive and negative samples in the sample set are c_+ and c_-, respectively:

c_+ = (1/N_+) Σ_{y_i=+1} x_i,   c_- = (1/N_-) Σ_{y_i=−1} x_i

and d_i^+ = ‖x_i − c_+‖ and d_i^- = ‖x_i − c_-‖ denote the distances between the positive and negative samples and their class centers;

22) Calculate the sample distribution densities ξ_+ and ξ_- of the positive and negative sample sets:

ξ_+ = Σ_i d_i^+ / (N_+ · max(d_i^+)),   ξ_- = Σ_i d_i^- / (N_- · max(d_i^-))

23) Calculate the bias parameters δ_+ and δ_- for the positive and negative samples:

δ_+ = 1 − ξ_+,   δ_- = 1 − ξ_-

24) The improved KKT condition I of the SVM classifier Model_old is:

|y_i f(x_i)| ≤ 1 + δ = 1 + (1 − ξ),   ξ ∈ {ξ_+, ξ_-}

where N_+ is the number of positive samples, N_- is the number of negative samples, and max(·) is the maximum of the distances.
3) Judge whether all samples in the newly added sample set B satisfy the standard KKT condition I of the SVM classifier Model_old; if they all do, output the original SVM classifier Model_old as the required model and finish; otherwise, put the samples of B that violate the standard KKT condition I into set B1 and the samples that satisfy it into set B2.
In step 3) each sample vector s_j in the newly added sample set B = {s_1, s_2, ..., s_j, ..., s_M} is judged: if every s_j satisfies the standard KKT condition I, the classifier Model_old is output and the process ends; otherwise, B is divided into two sets B1 and B2, the sample vectors satisfying the standard KKT condition I of the SVM classifier Model_old being put into set B2 and those violating it into set B1.
4) Judge whether the samples in the newly added sample set B satisfy the improved KKT condition I of the SVM classifier Model_old, and put the samples that satisfy it into the candidate support vector set SV_1, defined as possible support vector samples.
5) Train the classifier Model_1 on the candidate support vector set SV_1, obtaining the standard KKT condition II of Model_1; calculate the bias parameters for the positive and negative samples from the sample distribution densities of the support vector set SV_1, and obtain the improved KKT condition II after adding the bias parameters.
Step 5) uses the LIBSVM toolbox to train the classifier Model_1 on the candidate support vector set SV_1, calculates the sample distribution densities of the positive and negative samples in SV_1, calculates the bias parameters for the positive and negative samples from these densities, and adds the bias parameters to the standard KKT condition II to construct the improved KKT condition II of the SVM classifier Model_1, adaptively optimized for the positive and negative samples.
6) Judge whether set B2 satisfies the improved KKT condition II; if B2 is empty or all of its samples satisfy the improved KKT condition II, output the updated model, namely the classifier Model_1, and finish; otherwise, put the samples that do not satisfy the improved KKT condition II into the supplementary support vector set SV_add.
7) Train the classifier Model_2 on the set SV_0 ∪ SV_1 ∪ SV_add and output the updated classifier Model_2, where SV_0 is the support vector set of the original classifier and SV_add is the supplementary support vector set.
In step 7) the support vector set SV_0 of the original classifier, the set of possible support vectors SV_1 selected from the newly added sample set B, and the supplementary support vector set SV_add are merged, and the LIBSVM toolbox is used to train the classifier Model_2 on the set SV_0 ∪ SV_1 ∪ SV_add; this classifier Model_2 is the classifier model sought.
The data used in the embodiment of the invention are the abalone data set and the Waveform data set from the UCI standard classification data sets. The abalone classification data set contains 4177 instances with 8 attributes each; the Waveform classification data set contains 5000 instances with 21 attributes each. The preparatory work is as follows: the 4177 instances of the abalone classification data set are randomly divided into 10 equal parts, with 1 part as the original sample set, 1 part as the test sample set, and 8 parts as the incremental sample sets for 8 increments. The Waveform classification data set is divided into 10 equal parts by the same method. The SVM training function used in the program is the libsvmtrain function of the LIBSVM toolbox. The incremental learning SVM algorithm of the invention is compared with other methods; the simulation results are shown in FIG. 2 and FIG. 3 below.
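The 10-way split described above can be reproduced in a few lines; the shuffling and the fixed seed below are our assumptions, since the text only states that the split is random and equal:

```python
def split_uci(X, y, n_parts=10, seed=0):
    """Shuffle the data set and cut it into 10 equal parts: 1 original
    sample set, 1 test set, and 8 incremental batches."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    parts = np.array_split(idx, n_parts)
    original, test, increments = parts[0], parts[1], parts[2:]
    return ((X[original], y[original]),
            (X[test], y[test]),
            [(X[p], y[p]) for p in increments])
```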
FIG. 2 and FIG. 3 show the Receiver Operating Characteristic (ROC) curves of the classification results on the UCI standard data sets abalone and Waveform. The ordinate of the ROC curve, the True Positive Rate (TPR), indicates the proportion of all actually positive samples that are correctly judged as positive; the abscissa, the False Positive Rate (FPR), indicates the proportion of all actually negative samples that are wrongly judged as positive. The area under the curve is the Area Under Curve (AUC) index; the higher the AUC value, the better the classifier performance. The algorithm of the invention is represented by the improved KKT-ISVM curve in the figures; the two comparison methods are the KKT-based incremental SVM algorithm (KKT-ISVM) and the Combined Reserved Set incremental SVM algorithm (CRS-ISVM). FIG. 4 shows the classification accuracy of the different methods on the Waveform data set as a function of the number of incremental learning rounds; the higher the classification accuracy, the better the classifier performance.
From the ROC curves of multiple incremental learning rounds in the two simulation comparison experiments and the comparison of the AUC index (the area under the curve), the ROC curve of the improved KKT incremental SVM algorithm (improved KKT-ISVM) lies closest to the upper left and its AUC index is the highest among the compared methods; since a higher AUC means higher classification accuracy and better classifier performance, the method provided by the invention is effective and its performance is advanced.
While the invention has been described in terms of its preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (6)

1. An SVM incremental learning method for improving the KKT condition based on sample distribution density, characterized by comprising the following steps:
1) Train the SVM classifier Model_old on the original sample set A, obtaining the support vector set SV_0 of Model_old and the standard KKT condition I;
2) Calculate the sample distribution densities of the positive and negative samples in the original sample set A, calculate the bias parameters for the positive and negative samples from these densities, and add the bias parameters to the standard KKT condition I to construct the improved KKT condition I of the SVM classifier Model_old, adaptively optimized for the positive and negative samples;
3) Judge whether all samples in the newly added sample set B satisfy the standard KKT condition I of the SVM classifier Model_old; if they all do, output the original SVM classifier Model_old as the required model and finish; otherwise, put the samples of B that violate the standard KKT condition I into set B1 and the samples that satisfy it into set B2;
4) Judge whether the samples in the newly added sample set B satisfy the improved KKT condition I of the SVM classifier Model_old, and put the samples that satisfy it into the candidate support vector set SV_1, defined as possible support vector samples;
5) Train the classifier Model_1 on the candidate support vector set SV_1, obtaining the standard KKT condition II of Model_1; calculate the bias parameters for the positive and negative samples from the sample distribution densities of the support vector set SV_1, and obtain the improved KKT condition II after adding the bias parameters;
6) Judge whether set B2 satisfies the improved KKT condition II; if B2 is empty or all of its samples satisfy the improved KKT condition II, output the updated model, namely the classifier Model_1, and finish; otherwise, put the samples that do not satisfy the improved KKT condition II into the supplementary support vector set SV_add;
7) Train the classifier Model_2 on the set SV_0 ∪ SV_1 ∪ SV_add and output the updated classifier Model_2, where SV_0 is the support vector set of the original classifier and SV_add is the supplementary support vector set.
2. The SVM incremental learning method for improving the KKT condition based on sample distribution density according to claim 1, wherein in step 1) the SVM classifier Model_old is trained on the original sample set using the LIBSVM toolbox, and the standard KKT condition I is expressed as:

α_i = 0      ⟹  y_i f(x_i) ≥ 1
0 < α_i < C  ⟹  y_i f(x_i) = 1
α_i = C      ⟹  y_i f(x_i) ≤ 1

In solving for the optimal solution of the optimal hyperplane of the SVM classifier, under the constraint 0 < α_i ≤ C the standard KKT condition simplifies to y_i f(x_i) ≤ 1, where C is the penalty coefficient, α = (α_1, α_2, ..., α_n)^T is the vector of Lagrange multipliers, T denotes the transpose of a matrix, y_i ∈ {+1, −1} is the sample label, and f(x_i) is the distance of sample x_i from the optimal hyperplane.
3. The SVM incremental learning method for improving the KKT condition based on sample distribution density according to claim 1, wherein in step 2) the bias parameters for the positive and negative samples are calculated from the sample distribution density and the improved KKT condition I of the classifier Model_old is constructed, specifically as follows:

21) Let the original training set A = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} be the original sample set, with positive sample set A_+ = {x_i | y_i = +1} and negative sample set A_- = {x_i | y_i = −1}. The class centers of the positive and negative samples in the sample set are c_+ and c_-, respectively:

c_+ = (1/N_+) Σ_{y_i=+1} x_i,   c_- = (1/N_-) Σ_{y_i=−1} x_i

and d_i^+ = ‖x_i − c_+‖ and d_i^- = ‖x_i − c_-‖ denote the distances between the positive and negative samples and their class centers;

22) Calculate the sample distribution densities ξ_+ and ξ_- of the positive and negative sample sets:

ξ_+ = Σ_i d_i^+ / (N_+ · max(d_i^+)),   ξ_- = Σ_i d_i^- / (N_- · max(d_i^-))

23) Calculate the bias parameters δ_+ and δ_- for the positive and negative samples:

δ_+ = 1 − ξ_+,   δ_- = 1 − ξ_-

24) The improved KKT condition I of the SVM classifier Model_old is:

|y_i f(x_i)| ≤ 1 + δ = 1 + (1 − ξ),   ξ ∈ {ξ_+, ξ_-}

where N_+ is the number of positive samples, N_- is the number of negative samples, and max(·) is the maximum of the distances.
4. The SVM incremental learning method for improving the KKT condition based on sample distribution density according to claim 1, wherein in step 3) each sample vector s_j in the newly added sample set B = {s_1, s_2, ..., s_j, ..., s_M} is judged: if every s_j satisfies the standard KKT condition I, the classifier Model_old is output and the process ends; otherwise, the newly added sample set B is divided into two sets B1 and B2, the sample vectors satisfying the standard KKT condition I of the SVM classifier Model_old being put into set B2 and the sample vectors violating the standard KKT condition I into set B1.
5. The SVM incremental learning method for improving the KKT condition based on sample distribution density according to claim 1, wherein step 5) uses the LIBSVM toolbox to train the classifier Model_1 on the candidate support vector set SV_1, calculates the sample distribution densities of the positive and negative samples in SV_1, calculates the bias parameters for the positive and negative samples from these densities, and adds the bias parameters to the standard KKT condition II to construct the improved KKT condition II of the SVM classifier Model_1, adaptively optimized for the positive and negative samples.
6. The SVM incremental learning method for improving the KKT condition based on sample distribution density according to claim 1, wherein in step 7) the support vector set SV_0 of the original classifier, the set of possible support vectors SV_1 selected from the newly added sample set B, and the supplementary support vector set SV_add are merged, and the LIBSVM toolbox is used to train the classifier Model_2 on the set SV_0 ∪ SV_1 ∪ SV_add, the classifier Model_2 being the classifier model sought.
CN202110652246.7A 2021-06-11 2021-06-11 SVM incremental learning method based on sample distribution density improved KKT condition Active CN113449779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110652246.7A CN113449779B (en) 2021-06-11 2021-06-11 SVM incremental learning method based on sample distribution density improved KKT condition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110652246.7A CN113449779B (en) 2021-06-11 2021-06-11 SVM incremental learning method based on sample distribution density improved KKT condition

Publications (2)

Publication Number Publication Date
CN113449779A (en) 2021-09-28
CN113449779B CN113449779B (en) 2024-04-16

Family

ID=77811441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110652246.7A Active CN113449779B (en) 2021-06-11 2021-06-11 SVM incremental learning method based on sample distribution density improved KKT condition

Country Status (1)

Country Link
CN (1) CN113449779B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944122A (en) * 2010-09-17 2011-01-12 浙江工商大学 Incremental learning-fused support vector machine multi-class classification method
CN109190719A (en) * 2018-11-30 2019-01-11 长沙理工大学 Support vector machines learning method, device, equipment and computer readable storage medium
CN111160457A (en) * 2019-12-27 2020-05-15 南京航空航天大学 Turboshaft engine fault detection method based on soft class extreme learning machine
US10970650B1 (en) * 2020-05-18 2021-04-06 King Abdulaziz University AUC-maximized high-accuracy classifier for imbalanced datasets

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944122A (en) * 2010-09-17 2011-01-12 浙江工商大学 Incremental learning-fused support vector machine multi-class classification method
CN109190719A (en) * 2018-11-30 2019-01-11 长沙理工大学 Support vector machines learning method, device, equipment and computer readable storage medium
CN111160457A (en) * 2019-12-27 2020-05-15 南京航空航天大学 Turboshaft engine fault detection method based on soft class extreme learning machine
US10970650B1 (en) * 2020-05-18 2021-04-06 King Abdulaziz University AUC-maximized high-accuracy classifier for imbalanced datasets

Also Published As

Publication number Publication date
CN113449779B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
Ito et al. Optimizing support vector regression hyperparameters based on cross-validation
CN107392919B (en) Adaptive genetic algorithm-based gray threshold acquisition method and image segmentation method
CN101968853B (en) Improved immune algorithm based expression recognition method for optimizing support vector machine parameters
CN113326731A (en) Cross-domain pedestrian re-identification algorithm based on momentum network guidance
CN108062331A (en) Increment type naive Bayesian file classification method based on Lifelong Learning
CN109871872A (en) A kind of flow real-time grading method based on shell vector mode SVM incremental learning model
CN109961093A (en) A kind of image classification method based on many intelligence integrated studies
CN109656808B (en) Software defect prediction method based on hybrid active learning strategy
CN115578248B (en) Generalized enhanced image classification algorithm based on style guidance
CN116452862A (en) Image classification method based on domain generalization learning
Zhang et al. Dbiecm-an evolving clustering method for streaming data clustering
CN113449779A (en) SVM increment learning method for improving KKT condition based on sample distribution density
KR100869554B1 (en) Domain density description based incremental pattern classification method
CN111930484B (en) Power grid information communication server thread pool performance optimization method and system
Zhu et al. Joint learning of anchor graph-based fuzzy spectral embedding and fuzzy k-means
CN114972261A (en) Method for identifying surface quality defects of plate strip steel
CN114169542A (en) Integrated learning tree construction method for incomplete data classification
CN112785004A (en) Greenhouse intelligent decision-making method based on rough set theory and D-S evidence theory
CN113673581A (en) Method for generating confrontation sample of hard tag black box depth model and storage medium
Altinigneli et al. Hierarchical quick shift guided recurrent clustering
CN105825205A (en) Cooperative sparse representation self-adaptive rapid face recognition method
CN112308160A (en) K-means clustering artificial intelligence optimization algorithm
CN117975444B (en) Food material image recognition method for food crusher
Shi et al. Adaptive few-shot deep metric learning
Maoxiang et al. Steel Plate Surface Defects Classification Method using Multiple Hyper-planes Twin Support Vector Machine with Additional Information.

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant