CN102013946A - Method for correcting errors of support vector machine (SVM) classification for solving multi-classification problems - Google Patents


Info

Publication number
CN102013946A
CN102013946A
Authority
CN
China
Prior art keywords
svm
error correction
network
training
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010105284177A
Other languages
Chinese (zh)
Inventor
郭成安
赵泰洋
王成波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN2010105284177A priority Critical patent/CN102013946A/en
Publication of CN102013946A publication Critical patent/CN102013946A/en
Pending legal-status Critical Current

Landscapes

  • Error Detection And Correction (AREA)

Abstract

The invention discloses a method for correcting errors of support vector machine (SVM) classification for solving multi-classification problems and belongs to the technical field of pattern recognition and machine learning. The method is characterized by consisting of a training system of an error correction SVM network and a work system of the error correction SVM network, wherein the training system of the error correction SVM network consists of an encoder, a training sample set divider, and n SVM unit trainers; and the work system of the error correction SVM network consists of n SVM units which are trained by the training system of the error correction SVM network, and a decoder. A plurality of SVMs are effectively combined by an error correction coding algorithm in the digital communication to make the combined SVM network capable of correcting classification errors occurring in partial SVM units and reduce the quantity of the SVM units needing using so as to improve the comprehensive performance of the classification method.

Description

An error-correcting SVM classification method for solving multi-class classification problems
Technical field
The invention belongs to the technical field of pattern recognition and machine learning, and relates to a method that uses error-correction coding algorithms from digital communications to combine a plurality of SVMs (Support Vector Machines) effectively, so that the combined SVM network has the ability to correct local errors made by some of its constituent SVM units, thereby improving its overall classification performance.
Background technology
The SVM (Support Vector Machine) is an optimal classifier model based on the structural risk minimization criterion and has found wide application in fields such as pattern recognition and machine learning. The SVM was originally proposed for two-class problems and is optimal for solving them, but it does not directly yield optimal results on multi-class problems. In existing methods, multi-class problems are solved by combining a plurality of SVM units. However, there is at present no theoretically optimal combination method; in general, a suitable trade-off must be struck between classification performance and the number of SVM units employed. Exploring novel and efficient SVM combination methods therefore has important theoretical significance and practical value.
At present, research on SVM networks mainly concentrates on: (1) exploring different nonlinear kernel functions and different learning/training algorithms; (2) applications in various classification, recognition, intelligent computation, and machine learning problems; (3) combination methods for multi-class classification, of which the typical ones are the M-ary model, the "one-versus-rest" model, and the "one-versus-one" model. Among these, the M-ary model requires the fewest SVM units, only ⌈log₂M⌉ (where M is the number of classes and ⌈·⌉ denotes rounding up to the nearest integer), while the "one-versus-rest" model requires M SVM units. In both of these combination methods, a classification error in any single SVM unit causes the final overall classification result to be wrong, so their error tolerance is poor. And since real data always contain various errors, some SVM unit will inevitably make a mistake. For the "one-versus-one" model, the number of SVM units required is of order M², i.e. it grows quadratically with the number of classes to be handled, which is a very unfavorable factor for practical applications. Although the "one-versus-one" model does have a certain error tolerance in some cases, this tolerance is in fact bought with a highly redundant number of SVM units. It follows that a better SVM combination method, one that trades a small amount of redundancy for stronger error tolerance, would be very significant.
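The unit counts quoted above can be checked with a short sketch (illustrative only; the function name is not from the patent):

```python
import math

def svm_unit_counts(M):
    """Number of binary SVM units required by each combination scheme
    discussed in the background section, for an M-class problem."""
    return {
        "M-ary": math.ceil(math.log2(M)),   # fewest units: ceil(log2 M)
        "one-vs-rest": M,                   # one unit per class
        "one-vs-one": M * (M - 1) // 2,     # grows quadratically in M
    }

print(svm_unit_counts(10))
```

For M = 10 classes this gives 4, 10, and 45 units respectively, illustrating why the one-versus-one scheme becomes costly as M grows.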
Summary of the invention
The present invention proposes an error-correcting SVM classification method for solving multi-class problems (referred to as the error-correcting SVM network), which reduces the number of redundant SVM units required and improves the error tolerance of the combined system as a whole.
Technical scheme of the present invention is:
An error-correcting SVM classification method for solving multi-class problems (hereinafter simply the error-correcting SVM network) comprises a training system of the error-correcting SVM network and a working system of the error-correcting SVM network. The training system comprises an encoder, a training-sample-set divider, and n SVM unit trainers (SVM unit trainer-i, i = 1, …, n); the working system comprises n SVM units (SVM unit-i, i = 1, …, n) and a decoder.
The training system is used to train the error-correcting SVM network; this training process is in effect the design process of the network. In the training system, the input of the encoder is the training-sample feature vector set S_F. The encoder first generates a set of n-bit binary codewords by a specific coding algorithm and assigns a codeword to every vector in the input set as its class label, producing a corresponding feature vector set S_C in which every feature vector carries a codeword label. S_C is passed to the training-sample-set divider, which repartitions S_C according to the codeword labels of its feature vectors into n feature vector sets S_i (i = 1, …, n) and passes the i-th set S_i to SVM unit trainer-i (i = 1, …, n). SVM unit trainer-i trains on S_i with an existing SVM learning algorithm and, upon convergence, outputs SVM unit-i (i = 1, …, n). The same training operation is carried out in all n SVM unit trainers; once all n trainers have converged, the training (design) process of the error-correcting SVM network is complete.
The working system of the error-correcting SVM network performs the actual classification of the feature vectors of samples to be classified. The n SVM units in this system (SVM unit-i, i = 1, …, n) are the n SVM units obtained after convergence of the training operation above. Every SVM unit simultaneously receives the same feature vector X_f of the sample to be classified; the i-th SVM unit (SVM unit-i, i = 1, …, n) computes on X_f and produces an output bit b_i (i = 1, …, n), which is sent to the decoder. In the decoder, the n bits b_i (i = 1, …, n) are first assembled into an n-bit binary word B = (b_1 b_2 … b_n); B is then decoded with the decoding algorithm matched to the encoder, and the error-corrected decoding result Y_d is output. The class represented by Y_d is the classification result of the error-correcting SVM network for the input sample vector X_f.
The effect and benefit of the present invention are as follows: by incorporating error-correction coding methods from digital communications into the combination scheme of two-class SVM units, the combined SVM network gains error detection and correction capability, like a state-of-the-art digital communication system. At the same time, the number of SVM units required is comparatively small (far fewer than the "one-versus-one" model requires, and even fewer than the "one-versus-rest" model requires), and the error-correcting capability and the number of SVM units can be designed in advance according to the needs of the concrete application.
Description of drawings
The accompanying drawings show the structure of the error-correcting SVM network: Fig. 1 is the training system of the error-correcting SVM network, and Fig. 2 is its working system.
Embodiment
Specific embodiments of the invention are described in detail below in connection with the technical scheme and the accompanying drawings.
First the training system of the error-correcting SVM network shown in Fig. 1 is run; the embodiment steps are as follows:
Step 1: For the M-class classification problem to be solved, first select two suitable integer values n and l such that a binary code of length n exists with at least ⌈log₂M⌉ information bits and error-correcting capability l, where n is the number of SVM units to be used, ⌈log₂M⌉ denotes the smallest integer not less than log₂M, and l is the number of errors the error-correcting SVM network is to correct, chosen by the designer.
Step 2: Following coding theory as used in digital communications, select a suitable coding algorithm whose minimum Hamming distance is 2l+1; the present invention recommends BCH codes (Bose–Chaudhuri–Hocquenghem codes). In the encoder, generate a set of binary codewords (denoted {C(k)}) with this coding algorithm, assign to every vector in the input training-sample feature vector set S_F the codeword C(k) of its class as its class label, thereby producing the corresponding feature vector set S_C, and pass S_C to the training-sample-set divider.
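The relation between minimum distance and correctable errors can be illustrated with a small hand-picked code (a sketch only; a real design would generate {C(k)} with a BCH encoder as recommended above, and the codeword table here is not from the patent):

```python
from itertools import combinations

# A hand-picked length-7 binary code with one codeword per class.
CODEWORDS = {
    0: (0, 0, 0, 0, 0, 0, 0),
    1: (1, 1, 1, 0, 1, 0, 0),
    2: (0, 1, 1, 1, 0, 1, 0),
    3: (1, 0, 1, 1, 1, 1, 1),
}

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def min_distance(code):
    """Minimum pairwise Hamming distance of the code; a code with
    minimum distance d corrects floor((d-1)/2) single-bit errors,
    i.e. that many erroneous SVM units."""
    return min(hamming(a, b) for a, b in combinations(code.values(), 2))

d = min_distance(CODEWORDS)
print(d, (d - 1) // 2)   # prints the minimum distance and l
```

This toy code has minimum distance 4, so it corrects l = 1 erroneous unit output; choosing a code with minimum distance 2l+1 is the smallest-distance choice achieving a target l.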
Step 3: In the training-sample-set divider, repartition S_C according to the codeword of each feature vector, generating n feature vector sets S_i (i = 1, …, n). The specific procedure of this step is: each S_i is made up of two subsets, S_i = S_i^(0) ∪ S_i^(1), where S_i^(0) consists of all feature vectors in S_C whose codeword C(k) has bit value 0 in position i, and S_i^(1) consists of all feature vectors in S_C whose codeword C(k) has bit value 1 in position i. All n sets S_i are formed in this way (i = 1, …, n). The i-th feature vector set S_i is then passed to SVM unit trainer-i (i = 1, …, n).
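The divider of step 3 can be sketched as follows (the toy 3-bit code and sample data are illustrative, not from the patent):

```python
# S_C holds (feature_vector, codeword) pairs; each binary problem S_i
# labels every sample with bit i of its codeword, which implicitly
# splits S_i into the subsets S_i^(0) and S_i^(1).
CODEWORDS = {0: (0, 0, 0), 1: (1, 1, 0), 2: (0, 1, 1)}  # toy 3-bit code
S_C = [((0.1, 0.2), CODEWORDS[0]),
       ((0.9, 0.8), CODEWORDS[1]),
       ((0.5, 0.9), CODEWORDS[2])]

def divide(S_C, n):
    """Build S_1..S_n: pair every feature vector with bit i of its
    codeword, yielding one two-class training set per SVM unit."""
    return [[(x, cw[i]) for x, cw in S_C] for i in range(n)]

subsets = divide(S_C, 3)
print(subsets[1])   # the binary training set for SVM unit trainer-2
```

Each element of `subsets` is exactly the two-class problem one SVM unit trainer receives: the features are unchanged, only the binary label (one codeword bit) differs from unit to unit.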
Step 4: In SVM unit trainer-i, train with an existing SVM learning algorithm (for example the SMO (Sequential Minimal Optimization) learning algorithm) using the feature vector set S_i as training samples; the SVM model obtained after convergence is its output, SVM unit-i (i = 1, …, n). The same training operation is carried out in all n SVM unit trainers; once all have converged, the training process of the error-correcting SVM classifier, and thereby its design process, is complete.
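A minimal stand-in for one unit trainer is sketched below. It uses a plain perceptron rather than an SMO-trained kernel SVM (that substitution, and the toy data, are this sketch's assumptions); the point is only to show how a unit learns to emit one codeword bit:

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Stand-in for SVM unit trainer-i: a perceptron on the binary
    problem S_i. A real design would train a kernel SVM with an SMO
    solver, as the patent suggests."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, bit in samples:
            y = 1.0 if bit == 1 else -1.0   # map codeword bit to +/-1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                mistakes += 1
        if mistakes == 0:                   # converged: all separated
            break
    return w, b

def predict_bit(model, x):
    """Output bit b_i of a trained unit for feature vector x."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy linearly separable S_i: labels are bit i of each sample's codeword.
S_i = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((0.9, 1.0), 1), ((1.0, 0.8), 1)]
unit = train_perceptron(S_i)
print([predict_bit(unit, x) for x, _ in S_i])   # prints [0, 0, 1, 1]
```

After convergence, n such units run in parallel, each reproducing one bit position of the class codewords.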
The following steps run the working system of the error-correcting SVM network shown in Fig. 2.
Step 5: From this step on, the trained (designed) error-correcting SVM classifier performs the classification operation. The n SVM units used in the working system of the error-correcting SVM network (Fig. 2; SVM unit-i, i = 1, …, n) are the n converged output units SVM unit-i (i = 1, …, n) obtained from the training operations of steps 1 to 4 in the training system (Fig. 1). Let X_f be the feature vector of a sample to be classified; it is input to all n SVM units simultaneously.
Step 6: SVM unit-i computes on X_f and produces the output bit b_i (i = 1, …, n), which is sent to the decoder. In this step all n SVM units carry out the same computing operation.
Step 7: In the decoder, first assemble the n bits b_i (i = 1, …, n) into an n-bit binary word B = (b_1 b_2 … b_n); then decode B with the decoding algorithm corresponding to the encoder (for example a BCH decoding algorithm) and output the error-corrected decoding result Y_d. The class represented by Y_d is the classification result of the working system of the error-correcting SVM network for the input sample vector X_f.
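The decoding step can be sketched as minimum-Hamming-distance decoding over the codeword table (a sketch only; a real system would use the algebraic BCH decoder matched to the encoder, and the table and received word here are illustrative):

```python
CODEWORDS = {
    0: (0, 0, 0, 0, 0, 0, 0),
    1: (1, 1, 1, 0, 1, 0, 0),
    2: (0, 1, 1, 1, 0, 1, 0),
    3: (1, 0, 1, 1, 1, 1, 1),
}

def decode(B):
    """Return the class Y_d whose codeword is nearest to the received
    word B; up to l erroneous unit outputs are thereby corrected."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min(CODEWORDS, key=lambda k: dist(CODEWORDS[k], B))

# Suppose SVM unit-3 (bit index 2) flipped its output for a class-1 sample:
B = (1, 1, 0, 0, 1, 0, 0)
print(decode(B))   # prints 1: the single unit error is corrected
```

This is where the scheme differs from the M-ary and one-versus-rest models: a single wrong bit no longer changes the final class, because B is still closest to the correct codeword.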

Claims (1)

1. An error-correcting SVM classification method for solving multi-class classification problems, comprising a training system of an error-correcting SVM network and a working system of the error-correcting SVM network; the training system comprising an encoder, a training-sample-set divider, and n SVM unit trainers; the working system comprising n SVM units and a decoder; characterized in that:
The training system of the error-correcting SVM network is used to train the error-correcting SVM network and completes the design process of the network; the working system of the error-correcting SVM network carries out the classification operation and is constructed, with the identical network configuration, from the n SVM units trained by the training system. In the training system, following coding theory as used in digital communications, a coding algorithm with error-correcting capability is selected to produce a set of n-bit binary codewords; every vector in the input training-sample feature vector set is assigned a codeword as its class label, producing the corresponding feature vector set S_C; the training-sample-set divider repartitions S_C according to these codeword labels, generating n feature vector sets S_i; SVM unit trainer-i trains on S_i with an SVM learning algorithm, and once all trainers have converged the training process of the error-correcting SVM network is complete. In the working system, the n SVM units used are the n SVM units obtained after convergence of the training in the training system, connected in the identical structure; for a feature vector X_f of a sample to be classified, the i-th SVM unit in this system computes directly on X_f, producing the output bit b_i, which is sent to the decoder. In the decoder, the n bits b_i are first assembled into an n-bit binary word B = (b_1 b_2 … b_n); B is then decoded with the decoding algorithm matched to the encoder, and the error-corrected decoding result Y_d is output; the class represented by Y_d is the classification result of this classification method for the input sample vector X_f.
CN2010105284177A 2010-11-01 2010-11-01 Method for correcting errors of support vector machine (SVM) classification for solving multi-classification problems Pending CN102013946A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105284177A CN102013946A (en) 2010-11-01 2010-11-01 Method for correcting errors of support vector machine (SVM) classification for solving multi-classification problems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010105284177A CN102013946A (en) 2010-11-01 2010-11-01 Method for correcting errors of support vector machine (SVM) classification for solving multi-classification problems

Publications (1)

Publication Number Publication Date
CN102013946A true CN102013946A (en) 2011-04-13

Family

ID=43843979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105284177A Pending CN102013946A (en) 2010-11-01 2010-11-01 Method for correcting errors of support vector machine (SVM) classification for solving multi-classification problems

Country Status (1)

Country Link
CN (1) CN102013946A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722726A (en) * 2012-06-05 2012-10-10 江苏省电力公司南京供电公司 Multi-class support vector machine classification method based on dynamic binary tree

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187986A (en) * 2007-11-27 2008-05-28 海信集团有限公司 Face recognition method based on supervisory neighbour keeping inlaying and supporting vector machine
CN101599126A (en) * 2009-04-22 2009-12-09 哈尔滨工业大学 Utilize the support vector machine classifier of overall intercommunication weighting

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187986A (en) * 2007-11-27 2008-05-28 海信集团有限公司 Face recognition method based on supervisory neighbour keeping inlaying and supporting vector machine
CN101599126A (en) * 2009-04-22 2009-12-09 哈尔滨工业大学 Utilize the support vector machine classifier of overall intercommunication weighting

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wan Weibing et al.: "Object Detection and Recognition in Intelligent Video Surveillance", 31 January 2010, Shanghai Jiao Tong University Press *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722726A (en) * 2012-06-05 2012-10-10 江苏省电力公司南京供电公司 Multi-class support vector machine classification method based on dynamic binary tree
CN102722726B (en) * 2012-06-05 2014-01-15 江苏省电力公司南京供电公司 Multi-class support vector machine classification method based on dynamic binary tree

Similar Documents

Publication Publication Date Title
Scrucca GA: A package for genetic algorithms in R
CN105512289A (en) Image retrieval method based on deep learning and Hash
CN103858433B (en) Layered entropy encoding and decoding
Ozfatura et al. Distributed gradient descent with coded partial gradient computations
CN103929210B (en) Hard decision decoding method based on genetic algorithm and neural network
CN102164025A (en) Coder based on repeated coding and channel polarization and coding/decoding method thereof
CN106778700A (en) One kind is based on change constituent encoder Chinese Sign Language recognition methods
CN104268077A (en) Chaos genetic algorithm based test case intensive simple algorithm
Zhou et al. Octr: Octree-based transformer for 3d object detection
CN114880538A (en) Attribute graph community detection method based on self-supervision
Yuan et al. A novel hard decision decoding scheme based on genetic algorithm and neural network
CN202475439U (en) Hardware simulation verification platform based on configurable QC-LDPC coding and decoding algorithm
CN103023515A (en) Block column circulation based LDPC (low-density parity-check) encoder and block column circulation based LDPC encoding method in CMMB (China mobile multimedia broadcasting)
CN102013946A (en) Method for correcting errors of support vector machine (SVM) classification for solving multi-classification problems
CN107437976A (en) A kind of data processing method and equipment
Erbin et al. Deep Learning: Complete Intersection Calabi–Yau Manifolds
Liu et al. Deep product quantization module for efficient image retrieval
CN103605631A (en) Increment learning method on the basis of supporting vector geometrical significance
CN109450460A (en) A kind of parameter identification method of RS code and the concatenated code of convolutional code
CN110730006B (en) LDPC code error correction method and error correction module for MCU
CN110413647B (en) High-dimensional vector unequal length sequence similarity rapid calculation system
Berezkin et al. Models and methods for decoding of error-correcting codes based on a neural network
CN107665152A (en) The interpretation method of a kind of correcting and eleting codes
KR102115216B1 (en) Polar codes decoding device and method thereof
CN109151054B (en) Construction method of hierarchical code and repair method of fault node

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110413