CN105046236A - Iterative tag noise recognition algorithm based on multiple voting - Google Patents

Iterative tag noise recognition algorithm based on multiple voting

Info

Publication number
CN105046236A
CN105046236A CN201510490699.9A
Authority
CN
China
Prior art keywords
noise
sample
voting
iteration
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510490699.9A
Other languages
Chinese (zh)
Inventor
关东海
袁伟伟
李博涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201510490699.9A priority Critical patent/CN105046236A/en
Publication of CN105046236A publication Critical patent/CN105046236A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 - Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses an iterative label noise recognition algorithm based on multiple voting, belonging to the field of machine learning and data mining. The algorithm builds on ensemble learning: noise is judged by the votes of several classifiers, and this voting is repeated multiple times. Before each voting round the order of the samples to be checked is randomly shuffled, so each round can produce a different set of noise recognition results; the groups of results produced by the repeated voting are finally combined into the overall noise recognition result. The algorithm is also iterative: the samples fed into each iteration are the samples that remain after the noise filtered out in the previous iteration has been removed. Compared with the conventional single-voting scheme, multiple voting is more flexible and more accurate, aggregates the single-vote results at a further level, and can be adapted to different types of data and noise ratios. The iterative recognition procedure allows all noise data to be identified more completely and thoroughly.

Description

An iterative label noise identification algorithm based on multiple voting
Technical field
The present invention relates to the field of data mining and machine learning, and specifically to an iterative label noise identification algorithm based on multiple voting.
Background art
Much of the training data used by machine learning in practical applications is noisy; the causes include human error, faults in hardware devices, and errors in the data collection process. The traditional remedy is to preprocess the source data manually before applying machine learning algorithms, so as to obtain clean source data. Such manual work is difficult, tedious and time-consuming, and it still cannot guarantee that the data are fully correct, which seriously affects the algorithms applied afterwards. Data noise generally falls into two classes, attribute noise and class noise: attribute noise means that the attribute values of a sample are inaccurate, while class noise means that the label of a sample is inaccurate [1]. Compared with attribute noise, class noise has the larger impact.
Approaches to handling class noise include designing robust algorithms [2,3] and noise detection algorithms [4,5,6,7]. Designing robust algorithms mainly means improving existing algorithms so that they are less affected by class noise, whereas noise detection algorithms detect and remove the noise before the noisy data are used. By comparison, noise detection algorithms are more effective and more general.
Existing noise detection algorithms fall into two main classes: those based on k-nearest neighbours [4] and those based on ensemble learning [5,6,7]. The basic idea of the k-nearest-neighbour approach is to compare the class label of a sample with the labels of its neighbouring samples; if the labels are clearly inconsistent, the sample's label is considered noise. This approach inherits the limitations of the k-nearest-neighbour algorithm, and not every data distribution is suited to it. By contrast, algorithms based on ensemble learning are more widely applicable; their representatives are majority filtering and consensus filtering [7]. In these algorithms the training data are first divided into several subsets at random, and each subset is then checked for noise independently. The basic idea of the detection is a vote by multiple classifiers trained on the remaining subsets. Such algorithms consist of two main steps, sample partitioning and multi-classifier voting, and because both steps are performed exactly once, they belong to the label noise detection methods based on a single vote.
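For concreteness, the following Python sketch illustrates this single-vote scheme under stated assumptions: scikit-learn base classifiers, NumPy feature and label arrays, and a threshold of more than half of the classifiers for majority filtering or all of them for consensus filtering. The function name and the particular classifiers are illustrative and are not taken verbatim from reference [7].

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def single_vote_filter(X, y, num_cross=3, consensus=False, random_state=0):
    """Flag samples misclassified by a majority (or, for consensus filtering, all) of the classifiers."""
    learners = [GaussianNB(), DecisionTreeClassifier(), KNeighborsClassifier(n_neighbors=3)]
    wrong = np.zeros(len(y), dtype=int)  # numWrong per sample
    splitter = KFold(n_splits=num_cross, shuffle=True, random_state=random_state)
    for train_idx, test_idx in splitter.split(X):  # one random partition, used exactly once
        for learner in learners:  # vote of classifiers trained on the remaining subsets
            clf = clone(learner).fit(X[train_idx], y[train_idx])
            wrong[test_idx] += (clf.predict(X[test_idx]) != y[test_idx])
    threshold = len(learners) if consensus else len(learners) // 2 + 1
    return wrong >= threshold  # boolean mask of samples flagged as label noise

Because the partition and the vote happen exactly once, a single unlucky split can dominate the outcome, which is the weakness addressed by the repeated voting introduced below.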
Existing label noise detection methods based on a single vote have two shortcomings: the result of a single vote is strongly affected by the sample partition, and the chance of missing noise is relatively high.
List of references:
[1] Zhu, Xingquan, and Xindong Wu. "Class noise vs. attribute noise: A quantitative study." Artificial Intelligence Review 22.3 (2004): 177-210.
[2] J. Bootkrajang, A. Kaban, Classification of mislabelled microarrays using robust sparse logistic regression, Bioinformatics 29(7) (2013) 870-877.
[3] J. Saez, M. Galar, J. Luengo, F. Herrera, A first study on decomposition strategies with data with class noise using decision trees, in: Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science, vol. 7209, 2012, pp. 25-35.
[4] D.L. Wilson, Asymptotic properties of nearest neighbor rules using edited data, IEEE Trans. Syst. Man Cybernet. 2(3) (1972) 431-433.
[5] J. Young, J. Ashburner, S. Ourselin, Wrapper methods to correct mislabelled training data, in: 3rd International Workshop on Pattern Recognition in Neuroimaging, 2013, pp. 170-173.
[6] D. Guan, W. Yuan, et al., Identifying mislabeled training data with the aid of unlabeled data, Appl. Intell. 35(3) (2011) 345-358.
[7] C.E. Brodley, M.A. Friedl, Identifying mislabeled training data, J. Artif. Intell. Res. 11 (1999) 131-167.
Summary of the invention
The problem to be solved by the present invention is to provide an iterative label noise identification algorithm based on multiple voting. The algorithm adopts a multiple-voting scheme in which the parameters and strategies of both the repeated voting and the individual votes can be set according to the actual dataset, which avoids the problem that the result of a single vote is strongly affected by the sample partition and effectively improves the recognition accuracy, while the iterative scheme allows noise data to be found more thoroughly.
The iterative label noise identification algorithm based on multiple voting disclosed by the invention comprises the following steps:
Step 1) Determine the algorithm inputs: the sample set D to be processed, the maximum number of iterations maxIter, the number of voting rounds numVote, the minimum number of votes numFinalPass required for final noise identification, the number of random partitions numCross, the number of classifiers per vote numClassifier, and the minimum number of classifiers numPass required to flag noise within a single vote; initialize the voting-round counter t = 1, the outer iteration counter m = 1, and the pending sample set E = D;
Step 2) Randomly divide E into numCross subsets of equal size and initialize the parameter i = 1;
Step 3) Using the samples outside the i-th subset as training data, select numClassifier different classification algorithms and train numClassifier different classifiers H_1, H_2, ..., H_numClassifier;
Step 4) Use H_1, H_2, ..., H_numClassifier to classify the samples in the i-th subset, and count for each sample the number of classifiers numWrong that misclassify it; if numWrong is greater than or equal to the threshold numPass, the sample is marked as suspicious noise by this vote;
Step 5) Execute steps 2) to 4) repeatedly, increasing i by 1 after each pass, until i equals numCross; stop and record the set of suspicious noise produced by this voting round;
Step 6) Execute steps 2) to 5) repeatedly, increasing t by 1 after each pass, until t = numVote, which yields numVote sets of suspicious noise;
Step 7) Analyse the numVote sets of suspicious noise together: if the number of sets numExist in which a sample appears is greater than or equal to the threshold numFinalPass, the sample is identified as noise according to the combined voting results; the noise samples identified in this way form the noise set of the m-th iteration;
Step 8) Execute steps 2) to 7) repeatedly; after each iteration, remove the noise set of that iteration from E and increase m by 1, until the noise set of an iteration is empty or m = maxIter;
Step 9) Return E, the purified sample set with the noise removed; the algorithm terminates.
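To make steps 1) to 9) concrete, the following Python sketch implements the procedure under stated assumptions: NumPy arrays for the data, scikit-learn base classifiers, and a stopping rule of either an empty noise set or maxIter iterations. All function and variable names are illustrative and are not part of the patent.

import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def iterative_multi_vote_filter(X, y, max_iter=100, num_vote=5, num_final_pass=3,
                                num_cross=3, num_pass=2, base_learners=None,
                                random_state=0):
    """Return the indices of the samples kept after iterative multi-voting noise removal."""
    rng = np.random.default_rng(random_state)
    if base_learners is None:  # numClassifier different classification algorithms
        base_learners = [GaussianNB(), DecisionTreeClassifier(),
                         KNeighborsClassifier(n_neighbors=3)]
    kept = np.arange(len(y))  # E: the current set of samples still to be checked

    for m in range(max_iter):  # outer iterations (step 8)
        vote_counts = np.zeros(len(kept), dtype=int)  # numExist for every sample in E
        for t in range(num_vote):  # repeated voting rounds (step 6)
            order = rng.permutation(len(kept))  # shuffle the sample order before each vote
            folds = np.array_split(order, num_cross)  # numCross roughly equal subsets (step 2)
            suspicious = np.zeros(len(kept), dtype=bool)
            for i, fold in enumerate(folds):  # step 5: visit every subset once
                train = np.concatenate([f for j, f in enumerate(folds) if j != i])
                X_tr, y_tr = X[kept[train]], y[kept[train]]
                X_te, y_te = X[kept[fold]], y[kept[fold]]
                wrong = np.zeros(len(fold), dtype=int)  # numWrong per sample in the i-th subset
                for learner in base_learners:  # steps 3 and 4: train and classify
                    clf = clone(learner).fit(X_tr, y_tr)
                    wrong += (clf.predict(X_te) != y_te)
                suspicious[fold] = wrong >= num_pass  # suspicious noise in this vote
            vote_counts += suspicious  # accumulate over the numVote voting rounds
        noise_mask = vote_counts >= num_final_pass  # step 7: final multi-voting decision
        if not noise_mask.any():  # stop when an iteration finds no new noise
            break
        kept = kept[~noise_mask]  # remove the identified noise from E
    return kept  # step 9: indices of the purified sample set

A caller would then recover the purified data simply as X[kept] and y[kept].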
Further, in step 3) numClassifier is chosen to be odd, which simplifies the voting. The classification algorithms are one or more of k-nearest neighbours, decision trees, Bayesian classifiers, neural networks, and support vector machines. The choice of numClassifier also depends on the dataset. For small sample sets a larger numClassifier should be used to guarantee diversity among the classifiers, and a larger numClassifier should likewise be used when the label noise ratio of the sample set is high: it ensures a high label noise recognition rate in each iteration, helps to reduce the number of iterations, and improves the efficiency of the algorithm. On the other hand, when the sample set is large and its label noise ratio is low, a smaller numClassifier can be chosen, for example numClassifier = 3.
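As an illustration of this guidance, the helper below assembles an odd-sized pool from the classifier families named above using scikit-learn; the function name and the specific estimator settings are assumptions, not requirements of the patent.

from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def build_classifier_pool(num_classifier=3):
    """Return numClassifier heterogeneous base learners; an odd count simplifies the voting."""
    assert num_classifier % 2 == 1, "numClassifier should be odd"
    pool = [KNeighborsClassifier(n_neighbors=3), DecisionTreeClassifier(), GaussianNB(),
            MLPClassifier(max_iter=500), SVC()]
    # a small pool (e.g. 3) suits large, low-noise data; a larger one suits small or noisy data
    return pool[:num_classifier]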
In another refinement, in step 4) the value of numPass is chosen as numClassifier/2 or numClassifier. The larger numPass is set, the stricter the detection: clean data are then less likely to be treated as noise, but actual label noise is more likely to be kept as data.
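A tiny helper, with assumed names, makes the two recommended settings explicit: numClassifier/2 (rounded up) acts as a lenient majority-style threshold, while numClassifier acts as a strict consensus-style threshold.

import math

def flag_suspicious(num_wrong, num_classifier, strict=False):
    """Return True if a sample is marked as suspicious noise within a single vote."""
    num_pass = num_classifier if strict else math.ceil(num_classifier / 2)
    return num_wrong >= num_pass

# With 3 classifiers, flag_suspicious(2, 3) is True (majority threshold of 2),
# while flag_suspicious(2, 3, strict=True) is False (consensus needs all 3 to disagree).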
In another refinement, the value of numFinalPass in step 7) can be set to a commonly used value such as numVote/2 or numVote, or it can be optimized on an independent validation sample. The concrete steps are: a) estimate the noise ratio of the data to be processed from prior knowledge; b) add random noise to the validation sample accordingly; c) traverse all possible values of numFinalPass and compute, for each value, the algorithm's noise recognition accuracy on the validation sample; d) select the numFinalPass with the highest recognition accuracy. The larger this value is set, the stricter the detection: clean data are then less likely to be treated as noise, but label noise is more likely to be missed. numFinalPass should also be matched to numPass: if numPass is too small, numFinalPass should be increased, to avoid too many good samples being treated as noise; conversely, if numPass is too large, numFinalPass should be decreased, to avoid too many noise samples being kept as clean samples.
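Steps a) to d) can be sketched as follows in Python; run_multi_vote stands for one pass of the multi-voting procedure and is assumed to return, for every validation sample, the number of voting rounds that flagged it (numExist). All names here are illustrative.

import numpy as np

def inject_label_noise(y, noise_ratio, rng):
    """Step b): flip the labels of a random fraction of the validation samples."""
    y_noisy = y.copy()
    flip = rng.choice(len(y), size=int(noise_ratio * len(y)), replace=False)
    classes = np.unique(y)
    for i in flip:
        y_noisy[i] = rng.choice(classes[classes != y[i]])  # replace with a different label
    noise_mask = np.zeros(len(y), dtype=bool)
    noise_mask[flip] = True
    return y_noisy, noise_mask

def tune_num_final_pass(X_val, y_val, noise_ratio, run_multi_vote, num_vote=5, seed=0):
    """Steps a)-d): pick the numFinalPass with the highest noise recognition accuracy."""
    rng = np.random.default_rng(seed)
    y_noisy, true_noise = inject_label_noise(y_val, noise_ratio, rng)  # a) and b)
    num_exist = run_multi_vote(X_val, y_noisy)  # per-sample vote counts from numVote rounds
    best_value, best_acc = 1, -1.0
    for candidate in range(1, num_vote + 1):  # c) traverse all possible values
        acc = np.mean((num_exist >= candidate) == true_noise)  # recognition accuracy
        if acc > best_acc:
            best_value, best_acc = candidate, acc
    return best_value  # d) the value with the highest recognition accuracy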
The beneficial effects of the invention are as follows. The iterative label noise identification algorithm based on multiple voting identifies noise by voting repeatedly, and the sample order is shuffled at random before every vote, which guarantees diversity between the votes. Compared with the traditional single-voting scheme, multiple voting is more flexible and more accurate: a single voting strategy is often either too lenient or too strict, whereas multiple voting aggregates the single-vote results at a further level and can therefore meet the requirements of different data types and noise ratios. In addition, the recognition algorithm is iterative, and the samples to be checked in each iteration are the purified samples output after noise filtering in the previous iteration, so all noise data can be identified more completely and thoroughly. The algorithm of the invention addresses the limited recognition accuracy of existing label noise identification algorithms and ensures highly accurate noise identification.
Brief description of the drawings
Fig. 1 is a flow chart of the iterative label noise identification algorithm based on multiple voting according to the present invention.
Embodiment
The iterative label noise identification algorithm based on multiple voting proposed by the present invention is described in detail below with reference to the accompanying drawing.
As shown in Fig. 1, the iterative label noise identification algorithm based on multiple voting of the present invention comprises the following steps:
Step 1) Determine the algorithm inputs: the sample set D to be processed, the maximum number of iterations maxIter, the number of voting rounds numVote, the minimum number of votes numFinalPass required for final noise identification, the number of random partitions numCross, the number of classifiers per vote numClassifier, and the minimum number of classifiers numPass required to flag noise within a single vote; initialize the voting-round counter t = 1, the outer iteration counter m = 1, and the pending sample set E = D;
Step 2) Randomly divide E into numCross subsets of equal size and initialize the parameter i = 1;
Step 3) Using the samples outside the i-th subset as training data, select numClassifier different classification algorithms and train numClassifier different classifiers H_1, H_2, ..., H_numClassifier. numClassifier is chosen to be odd, for example 3, 5 or 7, although it is of course not limited to these values; the classification algorithms are one or more of k-nearest neighbours, decision trees, Bayesian classifiers, neural networks, and support vector machines.
Step 4) Use H_1, H_2, ..., H_numClassifier to classify the samples in the i-th subset, and count for each sample the number of classifiers numWrong that misclassify it; if numWrong is greater than or equal to the threshold numPass, the sample is marked as suspicious noise by this vote. The larger numPass is set, the stricter the detection: clean data are then less likely to be treated as noise, but actual label noise is more likely to be missed. numPass is therefore preferably numClassifier/2 or numClassifier, although these are only preferred examples and other suitable values may also be chosen.
Step 5) Execute steps 2) to 4) repeatedly, increasing i by 1 after each pass, until i equals numCross; stop and record the set of suspicious noise produced by this voting round;
Step 6) Execute steps 2) to 5) repeatedly, increasing t by 1 after each pass, until t = numVote, which yields numVote sets of suspicious noise;
Step 7) Analyse the numVote sets of suspicious noise together: if the number of sets numExist in which a sample appears is greater than or equal to the threshold numFinalPass, the sample is identified as noise according to the combined voting results, and the noise samples identified in this way form the noise set of the m-th iteration. numFinalPass is preferably numVote/2 or numVote; the larger this value is set, the stricter the detection: clean data are then less likely to be treated as noise, but label noise is more likely to be missed. numFinalPass should also be matched to numPass: if numPass is too small, numFinalPass should be increased, to avoid too many good samples being treated as noise; conversely, if numPass is too large, numFinalPass should be decreased, to avoid too many noise samples being kept as clean samples.
Step 8) Execute steps 2) to 7) repeatedly; after each iteration, remove the noise set of that iteration from E and increase m by 1, until the noise set of an iteration is empty or m = maxIter;
Step 9) Return E, the purified sample set with the noise removed; the algorithm terminates.
The following describes test results on two datasets from the UCI repository and the performance improvement over other label noise recognition methods; the identification method presented here is compared with Majority Filtering and Consensus Filtering, currently the most popular approaches. Because the original UCI data contain no label noise, noise is added artificially in this embodiment, at noise ratios of 10%, 20%, 30% and 40%. The performance of a label noise detection algorithm is measured by the number of labelling errors it makes. This error count has two parts: E1 denotes noise samples mistakenly diagnosed as clean data, and E2 denotes clean samples mistakenly diagnosed as noise. The smaller E1+E2 is, the higher the accuracy of the algorithm.
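Consistent with the definition above, a small helper (assumed names, not from the patent text) computes E1, E2 and their sum from the true and detected noise masks:

import numpy as np

def count_errors(true_noise_mask, detected_noise_mask):
    """Return (E1, E2, E1 + E2) for one label noise detection run."""
    true_noise = np.asarray(true_noise_mask, dtype=bool)
    detected = np.asarray(detected_noise_mask, dtype=bool)
    e1 = int(np.sum(true_noise & ~detected))  # noise samples diagnosed as clean data
    e2 = int(np.sum(~true_noise & detected))  # clean samples diagnosed as noise
    return e1, e2, e1 + e2  # a smaller E1 + E2 means higher accuracy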
Table 1 - Data sets
Data set    Sample count    Feature count
breast      699             9
wdbc        569             31
The parameter settings in the experiments are as follows: numCross = 3, numClassifier = 3 (the three classification algorithms being naive Bayes, decision tree and nearest neighbour), maxIter = 100, numVote = 5. Two combinations of numPass and numFinalPass are used: numPass = 2 with numFinalPass = 5 (referred to as the IMFCF algorithm), and numPass = 3 with numFinalPass = 3 (referred to as the ICFMF algorithm).
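Assuming the iterative_multi_vote_filter sketch given earlier, the two combinations could be invoked as follows; load_breast_cancer from scikit-learn serves only as a stand-in for the wdbc data used in the experiments.

from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)  # illustrative stand-in for the wdbc data set
# IMFCF-style combination: lenient per-vote threshold, strict final threshold
kept_imfcf = iterative_multi_vote_filter(X, y, max_iter=100, num_vote=5,
                                         num_cross=3, num_pass=2, num_final_pass=5)
# ICFMF-style combination: strict per-vote threshold, lenient final threshold
kept_icfmf = iterative_multi_vote_filter(X, y, max_iter=100, num_vote=5,
                                         num_cross=3, num_pass=3, num_final_pass=3)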
Table 2 - breast data set, results at 10% noise ratio
Table 3 - breast data set, results at 20% noise ratio
Table 4 - breast data set, results at 30% noise ratio
Table 5 - breast data set, results at 40% noise ratio
Table 6 - wdbc data set, results at 10% noise ratio
Table 7 - wdbc data set, results at 20% noise ratio
Table 8 - wdbc data set, results at 30% noise ratio
Table 9 - wdbc data set, results at 40% noise ratio
Tables 2 to 9 above show that, on both experimental datasets and across the different noise ratios, the algorithm proposed by the present invention is consistently better than the two traditional algorithms.
In summary, the above embodiment is intended only to illustrate the technical solution of the present invention, not to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of the claims of the present invention.

Claims (6)

1. An iterative label noise identification algorithm based on multiple voting, characterized in that it comprises the following steps:
Step 1) Determine the algorithm inputs: the sample set D to be processed, the maximum number of iterations maxIter, the number of voting rounds numVote, the minimum number of votes numFinalPass required for final noise identification, the number of random partitions numCross, the number of classifiers per vote numClassifier, and the minimum number of classifiers numPass required to flag noise within a single vote; initialize the voting-round counter t = 1, the outer iteration counter m = 1, and the pending sample set E = D;
Step 2) Randomly divide E into numCross subsets of equal size and initialize the parameter i = 1;
Step 3) Using the samples outside the i-th subset as training data, select numClassifier different classification algorithms and train numClassifier different classifiers H_1, H_2, ..., H_numClassifier;
Step 4) Use H_1, H_2, ..., H_numClassifier to classify the samples in the i-th subset, and count for each sample the number of classifiers numWrong that misclassify it; if numWrong is greater than or equal to the threshold numPass, the sample is marked as suspicious noise by this vote;
Step 5) Execute steps 2) to 4) repeatedly, increasing i by 1 after each pass, until i equals numCross; stop and record the set of suspicious noise produced by this voting round;
Step 6) Execute steps 2) to 5) repeatedly, increasing t by 1 after each pass, until t = numVote, which yields numVote sets of suspicious noise;
Step 7) Analyse the numVote sets of suspicious noise together: if the number of sets numExist in which a sample appears is greater than or equal to the threshold numFinalPass, the sample is identified as noise according to the combined voting results; the noise samples identified in this way form the noise set of the m-th iteration;
Step 8) Execute steps 2) to 7) repeatedly; after each iteration, remove the noise set of that iteration from E and increase m by 1, until the noise set of an iteration is empty or m = maxIter;
Step 9) Return E, the purified sample set with the noise removed; the algorithm terminates.
2. The iterative label noise identification algorithm based on multiple voting according to claim 1, characterized in that in step 3) numClassifier is chosen to be an odd number.
3. The iterative label noise identification algorithm based on multiple voting according to claim 2, characterized in that numClassifier = 3.
4. The iterative label noise identification algorithm based on multiple voting according to claim 1, characterized in that in step 4) the value of numPass is chosen as numClassifier/2 or numClassifier.
5. The iterative label noise identification algorithm based on multiple voting according to claim 1, characterized in that in step 7) the value of numFinalPass is chosen as numVote/2 or numVote.
6. The iterative label noise identification algorithm based on multiple voting according to claim 1, characterized in that in step 7) the value of numFinalPass is optimized using an independent validation sample, the concrete steps comprising: a) estimating the noise ratio of the data to be processed from prior knowledge; b) adding random noise to the validation sample; c) traversing all possible values of numFinalPass and computing, for each value, the algorithm's noise recognition accuracy on the validation sample; d) selecting the numFinalPass with the highest recognition accuracy.
CN201510490699.9A 2015-08-11 2015-08-11 Iterative tag noise recognition algorithm based on multiple voting Pending CN105046236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510490699.9A CN105046236A (en) 2015-08-11 2015-08-11 Iterative tag noise recognition algorithm based on multiple voting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510490699.9A CN105046236A (en) 2015-08-11 2015-08-11 Iterative tag noise recognition algorithm based on multiple voting

Publications (1)

Publication Number Publication Date
CN105046236A true CN105046236A (en) 2015-11-11

Family

ID=54452765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510490699.9A Pending CN105046236A (en) 2015-08-11 2015-08-11 Iterative tag noise recognition algorithm based on multiple voting

Country Status (1)

Country Link
CN (1) CN105046236A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040083270A1 (en) * 2002-10-23 2004-04-29 David Heckerman Method and system for identifying junk e-mail
CN101330476A (en) * 2008-07-02 2008-12-24 北京大学 Method for dynamically detecting junk mail

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONGHAI GUAN et al.: "Class Noise Detection by Multiple Voting", 2013 Ninth International Conference on Natural Computation *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292330A (en) * 2017-05-02 2017-10-24 南京航空航天大学 A kind of iterative label Noise Identification algorithm based on supervised learning and semi-supervised learning double-point information
CN107292330B (en) * 2017-05-02 2021-08-06 南京航空航天大学 Iterative label noise identification algorithm based on double information of supervised learning and semi-supervised learning
CN108509969A (en) * 2017-09-06 2018-09-07 腾讯科技(深圳)有限公司 Data mask method and terminal
CN110163376A (en) * 2018-06-04 2019-08-23 腾讯科技(深圳)有限公司 Sample testing method, the recognition methods of media object, device, terminal and medium
CN110163376B (en) * 2018-06-04 2023-11-03 腾讯科技(深圳)有限公司 Sample detection method, media object identification method, device, terminal and medium
CN110060247A (en) * 2019-04-18 2019-07-26 深圳市深视创新科技有限公司 Cope with the robust deep neural network learning method of sample marking error
CN111352966A (en) * 2020-02-24 2020-06-30 交通运输部水运科学研究所 Data tag calibration method in autonomous navigation
CN112562730A (en) * 2020-11-24 2021-03-26 北京华捷艾米科技有限公司 Sound source analysis method and system

Similar Documents

Publication Publication Date Title
CN105046236A (en) Iterative tag noise recognition algorithm based on multiple voting
CN107292330B (en) Iterative label noise identification algorithm based on double information of supervised learning and semi-supervised learning
Chung et al. Slice finder: Automated data slicing for model validation
Hassan et al. Detecting prohibited items in X-ray images: A contour proposal learning approach
CN110969166A (en) Small target identification method and system in inspection scene
CN107229942A (en) A kind of convolutional neural networks rapid classification method based on multiple graders
CN102324046A (en) Four-classifier cooperative training method combining active learning
CN102346829A (en) Virus detection method based on ensemble classification
CN107830996B (en) Fault diagnosis method for aircraft control surface system
CN102254193A (en) Relevance vector machine-based multi-class data classifying method
CN104657574B (en) The method for building up and device of a kind of medical diagnosismode
CN106326913A (en) Money laundering account determination method and device
CN117033912B (en) Equipment fault prediction method and device, readable storage medium and electronic equipment
CN103020643A (en) Classification method based on kernel feature extraction early prediction multivariate time series category
CN109255029A (en) A method of automatic Bug report distribution is enhanced using weighted optimization training set
CN108416373A (en) A kind of unbalanced data categorizing system based on regularization Fisher threshold value selection strategies
CN108509996A (en) Feature selection approach based on Filter and Wrapper selection algorithms
CN112307860A (en) Image recognition model training method and device and image recognition method and device
CN104615789A (en) Data classifying method and device
CN107480441B (en) Modeling method and system for children septic shock prognosis prediction
CN106951728B (en) Tumor key gene identification method based on particle swarm optimization and scoring criterion
CN114254146A (en) Image data classification method, device and system
CN102945238A (en) Fuzzy ISODATA (interactive self-organizing data) based feature selection method
CN111209939A (en) SVM classification prediction method with intelligent parameter optimization module
CN103810210B (en) Search result display methods and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20151111