CN102646198A - Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure - Google Patents

Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure

Info

Publication number
CN102646198A
Authority
CN
China
Prior art keywords
sample
sorter
training
classification
mlsvms
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012100399031A
Other languages
Chinese (zh)
Other versions
CN102646198B (en)
Inventor
张笑钦 (Zhang Xiaoqin)
王迪 (Wang Di)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN201210039903.1A priority Critical patent/CN102646198B/en
Publication of CN102646198A publication Critical patent/CN102646198A/en
Application granted granted Critical
Publication of CN102646198B publication Critical patent/CN102646198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a pattern recognition method based on a hierarchically structured mixed linear SVM (support vector machine) classifier (H-MLSVMs). Each iteration of the method mainly comprises two steps: first, the training weight of each sample is updated using the neighborhood geometry of the training data and the classification error of the current H-MLSVMs classifier; second, a weighted linear SVM classifier is trained on the weighted samples, embedded into the current H-MLSVMs, and the classification error of the classifier is updated. The method is an effective and general approach to pattern recognition and classification. Experimental results show that, compared with other classical SVM methods, it achieves generalization ability comparable to kernel-based SVM methods while avoiding their drawbacks, such as the difficulty of selecting a kernel function and its parameters and the very large memory footprint; in particular, its classification time complexity is of the same order as that of a linear SVM. The method is therefore more efficient when classifying new samples, can further be applied in real-time pattern recognition systems, and has broad application prospects.

Description

Pattern recognition method based on a hierarchically structured mixed linear SVM classifier
Technical field
The present invention relates to the field of computer pattern recognition, and in particular to a pattern recognition method based on a hierarchically structured mixed linear SVM classifier.
Background technology
Classifier design is a research focus in computer pattern recognition, because classifiers are the basic tools of pattern recognition research. In general, current classifier algorithms face two key problems: (1) the generalization ability of the classifier; (2) the time spent classifying new samples.
In general, the generalization ability of a classifier is its ability to predict the class of unknown samples, i.e., its classification accuracy. The SVM classifier, based on statistical learning theory and the structural risk minimization principle, has become the most successful classifier in recent years owing to its globally optimal solution and excellent generalization ability, and is widely used in fields such as artificial intelligence, machine learning, neurobiology, and medicine. Its original motivation was, for two linearly separable classes of data samples, to find the maximum margin separating the two classes. In practical applications, however, many classification data sets are not linearly separable. To address this, the concept of the kernel function was introduced: the data in the original space are mapped through a kernel into a high-dimensional or even infinite-dimensional kernel space, where the mapped data can be regarded as linearly or approximately linearly separable, thus solving the classification problem for linearly inseparable data. Although kernel-based SVMs have successfully extended the range of application of the linear SVM and have excellent generalization ability, they still suffer from the following three major drawbacks:
1) Difficulty of selecting the kernel function and its parameters: there are many kernel functions, such as the polynomial kernel, the spline kernel, and the radial basis kernel, and each kernel has its own parameters. The choice of kernel function and its parameters has a decisive effect on the generalization ability of the SVM classifier. How to select a suitable kernel function and parameters so that the classifier generalizes well remains an open problem that urgently needs to be solved.
2) Training the classifier requires very large memory: training a kernel-based SVM requires storing an O(l^2) non-sparse kernel matrix, where l is the number of training samples. This prevents the SVM from handling large-scale classification problems.
3) High classification time complexity: when classifying a new sample, a kernel-based SVM classifier must evaluate the kernel function between the new sample and every support vector, so its classification complexity is linear in the number of support vectors. Steinwart proved that the number of support vectors grows at least linearly with the number of training samples. A classifier trained on a large data set will therefore be very slow when classifying new samples, which makes it unsuitable for real-time pattern recognition systems such as pedestrian detection and aircraft visual navigation systems.
To overcome the above drawbacks of kernel-based SVMs, several methods that construct classifiers in the original data space have been proposed. Fu proposed a mixture of linear SVM classifiers based on a joint distribution model of the training data and labels. However, the joint distribution of the data is difficult to estimate, and when the actual data distribution does not match the assumed probabilistic model, the generalization ability of the classifier degrades greatly. Li proposed a piecewise linear classifier, the multiconlitron, based on the notion of convex separability. However, because this classifier does not introduce the concept of a soft margin, it cannot handle two-class data that overlap each other, which greatly limits its range of application.
Summary of the invention
The present invention provides a pattern recognition method based on a hierarchically structured mixed linear SVM classifier, H-MLSVMs. It has a hierarchical tree structure in which each node corresponds to a mixed linear SVM classifier. Compared with other classical SVM methods, this classification method achieves generalization ability comparable to kernel-based SVM methods without their drawbacks, such as the difficulty of selecting the kernel function and its parameters and the very large memory consumption; in particular, its classification time complexity is of the same order as that of a linear SVM, which makes it more efficient when classifying new samples.
To achieve these goals, the present invention adopts the following technical scheme:
A pattern recognition method based on a hierarchically structured mixed linear SVM classifier, in which the classifier at each node corresponds to one iteration of the training process. Each iteration mainly comprises the following two steps:
(1) Update the training weight of each sample using the neighborhood geometry of the training data and the classification error of the current H-MLSVMs classifier;
(2) Train a weighted linear SVM classifier on the weighted samples, embed this classifier into the current H-MLSVMs, update the classification error of the classifier, and return to the first step.
The root-node classifier of H-MLSVMs is obtained by training a linear SVM with equal sample weights. The above iterative process stops when the number of node classifiers in H-MLSVMs reaches a given integer, or when the increase in accuracy on the test data set falls within a sufficiently small range.
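For concreteness, the overall scheme can be sketched as the following training loop. This is a minimal illustrative sketch, not the patent's implementation: the function name train_h_mlsvms, the use of scikit-learn's LinearSVC, the dictionary-based tree layout, and the helpers update_weights and embed_classifier (which stand for steps (1) and (2) and are sketched later in the detailed description) are all assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_h_mlsvms(X, y, max_classifiers=25, eps=1.0):
    """Sketch of the H-MLSVMs training loop described above.

    X: (l, n) array of training samples; y: labels in {-1, +1}.
    update_weights and embed_classifier are hypothetical helpers that
    correspond to steps (1) and (2); sketches of both appear later.
    """
    l = len(y)
    # Root-node classifier: a linear SVM trained with equal sample weights.
    root = LinearSVC().fit(X, y, sample_weight=np.ones(l))
    tree = {(0,): root.decision_function}          # node tuple -> decision function
    preds = tree[(0,)](X)                          # current predictions V^(1)
    leaves = {(0, 1): np.flatnonzero(preds >= 0),  # S_[0,1]
              (0, -1): np.flatnonzero(preds < 0)}  # S_[0,-1]

    # Stop once the number of node classifiers reaches a given integer
    # (the alternative test-set criterion is omitted in this sketch).
    while len(tree) < max_classifiers:
        w, c = update_weights(X, y, preds, eps)                # step (1)
        f_t = LinearSVC().fit(X, y, sample_weight=w)           # weighted linear SVM
        preds = embed_classifier(tree, leaves, c,              # step (2)
                                 f_t.decision_function, X)
    return tree, leaves
```

A call such as train_h_mlsvms(X_train, y_train, max_classifiers=25) would mirror the 25-iteration example shown later in Fig. 2(e).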
Further, step (1) specifically comprises the following substeps:
First, a "training center sample" is selected using the neighborhood geometry of the training data and the classification error of the current H-MLSVMs classifier; the region around this "training center sample" is the region most seriously misclassified by the current H-MLSVMs classifier;
Second, the training weight of each sample is computed from the distance between that sample and the "training center sample" selected in the previous substep;
Finally, to prevent the training weights from being biased toward one class, the training weights are normalized within each class.
Further, step (2) specifically comprises the following substeps:
First, a weighted linear SVM classifier is trained on the weighted training samples;
Second, this classifier is combined with the classifier at the parent node of the node to which the "training center sample" belongs, yielding a mixed linear SVM classifier;
Third, this mixed linear SVM classifier is taken as the classifier at the node where the "training center sample" resides;
Finally, the classification error of the classifier is updated.
The beneficial effects of the invention are:
1. The classification algorithm proposed by the invention is a general method, applicable to any type of classification data.
2. The invention proposes an adaptive weight-adjustment strategy that automatically adjusts the training weight of each sample according to the current classification error. Compared with classical weight-setting methods, because it considers not only the classification error but also the geometric structure of the data, this weighting scheme is more robust to noise points.
3. The invention adopts a classifier with a hierarchical tree structure in which each node holds a mixed linear SVM classifier. Its basic idea is to separate, layer by layer, the misclassified positive samples from the negative samples, or the misclassified negative samples from the positive samples. Compared with the classical linear SVM classifier, it greatly improves generalization on linearly inseparable data; compared with the classical kernel-based SVM classifier, the generalization ability of the H-MLSVMs classifier approaches that of a kernel SVM while avoiding the difficulty of selecting the kernel function and its parameters and the large memory consumption during training; more importantly, the required classification time complexity is greatly reduced, so the classifier can be used in real-time pattern recognition systems.
Description of drawings
The present invention is further described below in conjunction with the accompanying drawings and embodiments.
Fig. 1 is the training flowchart of the H-MLSVMs classifier of the present invention;
Fig. 2 is a two-dimensional example illustrating the training process of the classifier of the present invention;
Fig. 3 is the flowchart of the process by which the classifier of the present invention classifies a new sample.
Embodiment
The present invention is described concretely below through embodiments, which serve only to further illustrate the invention and should not be construed as limiting its scope of protection; persons skilled in the art may make non-essential improvements and adjustments to the invention in light of the foregoing disclosure.
Fig. 1 is the training flowchart of the H-MLSVMs classifier of the present invention, which implements a pattern recognition method based on a hierarchically structured mixed linear SVM classifier. Fig. 2 shows a two-dimensional example of the classifier training process. Fig. 3 is the flowchart of the process by which the H-MLSVMs classifier classifies a new sample. The method of the present invention is not limited to any particular hardware or programming language and can be implemented in any language; other modes of operation are therefore not described further.
In an embodiment of the invention, a Pentium 4 computer with a 3.2 GHz central processing unit and 1 GB of memory is used, and the working program of the H-MLSVMs classifier is written in the Matlab language, thereby realizing the method of the present invention.
The pattern recognition method based on a hierarchically structured mixed linear SVM classifier of the present invention mainly comprises two steps: updating the training sample weights and updating the hierarchical classifier H-MLSVMs.
Before introducing the concrete steps, we first introduce the meaning of the symbols used below.
S: the training sample set, i.e. S = {(x_i, y_i) | x_i ∈ R^n, y_i ∈ {−1, +1}}, i = 1, …, l;
I: a node of the H-MLSVMs classifier. I = [0] denotes the root node; if it splits, [0, 1] and [0, -1] are its left and right child nodes. Similarly, if node [0, 1] splits again, then [0, 1, 1] and [0, 1, -1] are its left and right child nodes. A node that no longer splits is called a leaf node. The number of elements of I gives the layer of node I in H-MLSVMs; as shown in Fig. 2(f), node [0, 1, 1, 1] is in the fourth layer of H-MLSVMs;
f^(t): the weighted linear SVM computed at the t-th iteration of training the H-MLSVMs classifier;
f_I: the classifier at node I (as shown in Fig. 2(f)); leaf nodes carry no classifier;
S_I: the set of samples assigned to node I. As shown in Fig. 2(f), all training samples are initially assigned to the root node, i.e. S_[0] = S. The classifier f_[0] at the root node splits the set S_[0] into the sets S_[0,1] and S_[0,-1]; the classifier f_[0,1] at node [0,1] further splits S_[0,1] into S_[0,1,1] and S_[0,1,-1]; the sets at the other nodes are obtained in the same way. It can be seen that the sets at the leaf nodes are pairwise disjoint and their union is the whole training set. If the last element of a leaf node I is denoted I(|I|), then I(|I|) determines the class attribute of the samples in the set S_I: I(|I|) = 1 (respectively I(|I|) = -1) means that the samples in S_I are classified as positive (respectively negative) by the H-MLSVMs classifier;
M: the number of all node classifiers in H-MLSVMs;
F_t: the set of node classifiers in H-MLSVMs after the t-th iteration; as shown in Fig. 2(f), F_4 = {f_[0], f_[0,1], f_[0,1,1], f_[0,-1]};
V^(t): V^(t) = [V_1^(t), …, V_l^(t)], where V_i^(t) denotes the predicted value of the H-MLSVMs classifier for sample x_i after the t-th iteration.
In Fig. 2, red circles denote positive samples and blue crosses denote negative samples. The training process and the classification process of the H-MLSVMs classifier are described below with reference to Fig. 2; the concrete steps are as follows:
Initial H-MLSVMs: all training samples are assigned to the root node, i.e. S_[0] = S. A linear SVM classifier f^(1) is trained on the training samples with equal weights (as shown in Fig. 2(a)) and assigned to the root node, i.e. f_[0] = f^(1). f_[0] splits the set S_[0] into the sets S_[0,1] and S_[0,-1], where S_[0,1] = {x | f_[0](x) >= 0, x ∈ S_[0]} and S_[0,-1] = {x | f_[0](x) < 0, x ∈ S_[0]}. It can be seen that S_[0,1] contains many misclassified negative samples and S_[0,-1] contains many misclassified positive samples. The structure of the H-MLSVMs F_1 at this point is shown in Fig. 2(f), and its prediction on the training samples is V^(1) = F_1(S_[0]) = f_[0](S_[0]).
We now enter the iterative stage, which mainly consists of the following two steps.
(1) Update the training weight of each sample using the neighborhood geometry of the training data and the classification error of the current H-MLSVMs classifier. This mainly comprises:
a) Find the "training center sample" x_c (the dark blue triangle in Fig. 2(b)), i.e. a sample around which the current H-MLSVMs classifier misclassifies most seriously, together with its neighbors, defined as N_ε(x_i) = {x_j | ||x_j − x_i|| < ε, y_j = y_i, (x_j, y_j) ∈ S}, where ε is a small positive threshold;
b) Compute the weight of each sample from its distance to the "training center sample", i.e. ṽ_i = exp{−||x_i − x_c||²};
c) To prevent the training weights from being biased toward one class, normalize the training weights within each class. As shown in Fig. 2(b), the size of each marker represents the size of the sample weight.
It can be seen from the above computation that very large weights are given to the samples in the region most seriously misclassified by the classifier F_1.
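A minimal sketch of this weight update is given below. The selection rule for x_c (the sample whose same-class ε-neighborhood contains the most currently misclassified samples) and the weight ṽ_i = exp{−||x_i − x_c||²} are assumptions read off the description above, not a verbatim reproduction of the patent's formulas; the function name update_weights matches the hypothetical helper used in the earlier training-loop sketch.

```python
import numpy as np

def update_weights(X, y, preds, eps):
    """Step (1) sketch: re-weight the training samples around the
    'training center sample' x_c (assumptions as stated in the lead-in)."""
    errs = np.sign(preds) != y                    # samples misclassified by current H-MLSVMs
    # a) pick x_c: the sample whose same-class eps-neighborhood holds the most errors
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    same_class = y[:, None] == y[None, :]
    errors_nearby = ((d2 < eps ** 2) & same_class & errs[None, :]).sum(axis=1)
    c = int(np.argmax(errors_nearby))

    # b) distance-based weights relative to x_c
    w = np.exp(-((X - X[c]) ** 2).sum(axis=1))

    # c) normalize within each class so that neither class dominates
    for label in (-1, 1):
        mask = y == label
        w[mask] /= w[mask].sum()
    return w, c
```

The threshold ε is left as a free parameter in this sketch.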
(2) Train a weighted linear SVM classifier on the weighted samples, embed this classifier into the current H-MLSVMs, and update the classification error of the classifier. This mainly comprises:
a) As shown in Fig. 2(b), train a weighted linear SVM classifier f^(2) on the training samples with the weights ṽ_i computed above (the magenta line in Fig. 2(b));
b) Suppose x_c ∈ S_I, i.e. the set S_I is the one most seriously misclassified. Let I_1 = I(1:|I|−1) and I_2 = I(|I|); then I_1 is the parent node of I. Combining f^(2) with the classifier at node I_1 yields a mixed linear SVM classifier: if I(|I|) = 1, the mixed linear SVM classifier is min{f^(2), f_{I_1}}; if I(|I|) = −1, it is correspondingly max{f^(2), f_{I_1}}. In Fig. 2(b), because I = [0, 1], the mixed linear SVM classifier is min{f^(2), f_[0]};
c) Assign the mixed linear SVM classifier min{f^(2), f_[0]} obtained above to the classifier f_[0,1] at node [0,1]. f_[0,1] further splits the set S_[0,1] into S_[0,1,1] and S_[0,1,-1], where S_[0,1,1] = {x | f_[0,1](x) >= 0, x ∈ S_[0,1]} and S_[0,1,-1] = {x | f_[0,1](x) < 0, x ∈ S_[0,1]}. It can be seen that f_[0,1] separates out part of the negative samples in S_[0,1] that were misclassified by F_1. The H-MLSVMs at this point is F_2 = F_1 ∪ {f_[0,1]}; compared with F_1, F_2 has one more node classifier, f_[0,1], and therefore a higher classification accuracy. Its structure is shown in Fig. 2(f);
d) Compute the prediction of F_2 on the training samples and return to step (1).
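Continuing the assumed dictionary-based layout of the earlier training-loop sketch (tree maps node tuples to decision functions, leaves maps leaf tuples to arrays of sample indices), the embedding step b)–d) can be sketched as follows; the min{·,·} case follows the example above for I(|I|) = 1, and the max{·,·} case for I(|I|) = −1 is inferred by symmetry.

```python
import numpy as np

def embed_classifier(tree, leaves, c, f_t, X):
    """Step (2) sketch: embed the weighted linear SVM f_t (a decision
    function) into H-MLSVMs at the leaf holding the center sample x_c."""
    I = next(node for node, idx in leaves.items() if c in idx)  # leaf containing x_c
    f_parent = tree[I[:-1]]                                     # classifier at parent node I_1
    if I[-1] == 1:     # "positive" leaf: peel off misclassified negatives, min{f_t, f_parent}
        f_I = lambda Z: np.minimum(f_t(Z), f_parent(Z))
    else:              # "negative" leaf: peel off misclassified positives, max{f_t, f_parent}
        f_I = lambda Z: np.maximum(f_t(Z), f_parent(Z))
    tree[I] = f_I

    # Node I is no longer a leaf: split its samples into two new leaves.
    idx = leaves.pop(I)
    vals = f_I(X[idx])
    leaves[I + (1,)] = idx[vals >= 0]
    leaves[I + (-1,)] = idx[vals < 0]

    # Recompute the prediction for every training sample: it is the value of
    # the classifier at the parent of the leaf the sample now occupies.
    preds = np.empty(len(X))
    for leaf, members in leaves.items():
        if len(members):
            preds[members] = tree[leaf[:-1]](X[members])
    return preds
```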
Repeating the process of steps (1) and (2), we obtain the H-MLSVMs classifiers F_3 (shown in Fig. 2(c)) and F_4 (shown in Fig. 2(d)), whose structures are shown in Fig. 2(f). The above iterative process stops when the number of node classifiers in H-MLSVMs reaches a given integer, or when the increase in accuracy on the test data set falls within a sufficiently small range. In our tests we adopt the first stopping criterion. Fig. 2(e) shows the classification boundary of the H-MLSVMs classifier after 25 iterations.
When a new sample needs to be classified by H-MLSVMs, it is routed from the root node of the hierarchical classifier H-MLSVMs down to some leaf node, and its class is determined from the computed functional value; the concrete classification process is shown in Fig. 3.
Since F_M contains the classifiers at all nodes of H-MLSVMs, the condition "f_[I,1] ∉ F_M (f_[I,-1] ∉ F_M)" in Fig. 3 indicates that x has reached a leaf node; its functional value is then output, and if this value is greater than or equal to zero x belongs to the positive class, otherwise to the negative class. "f_[I,1] ∈ F_M (f_[I,-1] ∈ F_M)" means that x has not yet reached a leaf node and must be further classified by the classifier f_[I,1] (f_[I,-1]). For the sample x (the pentagram) in Fig. 2(c), the path taken in the first three steps of its classification is indicated by the arrows in Fig. 2(f).
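Under the same assumed layout, the descent of a new sample from the root to a leaf (Fig. 3) can be sketched as:

```python
import numpy as np

def classify(tree, x):
    """Route a new sample x down the H-MLSVMs tree and return its class.

    tree maps node tuples to decision functions, as in the training sketches
    above (an assumed layout); leaves carry no classifier, so the descent
    stops at the first index that is absent from tree."""
    I = (0,)
    val = 0.0
    while I in tree:
        val = float(tree[I](x.reshape(1, -1))[0])   # functional value at node I
        I = I + (1,) if val >= 0 else I + (-1,)     # go to left (+1) or right (-1) child
    return 1 if val >= 0 else -1                    # class = sign of the last functional value
```

The new sample is evaluated only along a single root-to-leaf path, which is consistent with the claim above that the classification time complexity stays of the same order as a linear SVM.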

Claims (3)

1. A pattern recognition method based on a hierarchically structured mixed linear SVM classifier, characterized by comprising the following steps:
(1) updating the training weight of each sample using the neighborhood geometry of the training data and the classification error of the current H-MLSVMs classifier;
(2) training a weighted linear SVM classifier on the weighted samples, embedding this classifier into the current H-MLSVMs, updating the classification error of the classifier, and returning to the first step;
wherein the root-node classifier of H-MLSVMs is obtained by training a linear SVM with equal sample weights, and the above iterative process stops when the number of node classifiers in H-MLSVMs reaches a given integer, or when the increase in accuracy on the test data set falls within a sufficiently small range.
2. The pattern recognition method based on a hierarchically structured mixed linear SVM classifier according to claim 1, characterized in that said step (1) specifically comprises the following substeps:
first, selecting a "training center sample" using the neighborhood geometry of the training data and the classification error of the current H-MLSVMs classifier, the region around this "training center sample" being the region most seriously misclassified by the current H-MLSVMs classifier;
second, computing the training weight of each sample from the distance between that sample and the "training center sample" selected in the previous substep;
finally, to prevent the training weights from being biased toward one class, normalizing the training weights within each class.
3. The pattern recognition method based on a hierarchically structured mixed linear SVM classifier according to claim 1, characterized in that said step (2) specifically comprises the following substeps:
first, training a weighted linear SVM classifier on the weighted training samples;
second, combining it with the classifier at the parent node of the node to which the "training center sample" belongs, to obtain a mixed linear SVM classifier;
third, taking this mixed linear SVM classifier as the classifier at the node where the "training center sample" resides;
finally, updating the classification error of the classifier.
CN201210039903.1A 2012-02-21 2012-02-21 Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure Active CN102646198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210039903.1A CN102646198B (en) 2012-02-21 2012-02-21 Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210039903.1A CN102646198B (en) 2012-02-21 2012-02-21 Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure

Publications (2)

Publication Number Publication Date
CN102646198A true CN102646198A (en) 2012-08-22
CN102646198B CN102646198B (en) 2014-12-17

Family

ID=46659014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210039903.1A Active CN102646198B (en) 2012-02-21 2012-02-21 Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure

Country Status (1)

Country Link
CN (1) CN102646198B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945371A (en) * 2012-10-18 2013-02-27 浙江大学 Classifying method based on multi-label flexible support vector machine
CN102945372A (en) * 2012-10-18 2013-02-27 浙江大学 Classifying method based on multi-label constraint support vector machine
CN105654124A (en) * 2015-12-29 2016-06-08 大连楼兰科技股份有限公司 Method for increasing training speed and convergence speed of Adboost
CN109350072A (en) * 2018-11-15 2019-02-19 北京航空航天大学 A kind of cadence detection method based on artificial neural network
CN110222727A (en) * 2019-05-15 2019-09-10 广东电网有限责任公司电力调度控制中心 A kind of short-term load forecasting method and device based on deep neural network
CN113537137A (en) * 2021-08-02 2021-10-22 浙江索思科技有限公司 Escalator-oriented human body motion intrinsic feature extraction method and system
CN117804670A (en) * 2024-02-28 2024-04-02 深圳市沃感科技有限公司 Error correction method and device for plate-type pressure sensor

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090281981A1 (en) * 2008-05-06 2009-11-12 Chen Barry Y Discriminant Forest Classification Method and System

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090281981A1 (en) * 2008-05-06 2009-11-12 Chen Barry Y Discriminant Forest Classification Method and System

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Nicolò Cesa-Bianchi et al.: "Hierarchical Classification: Combining Bayes with SVM", Proceedings of the 23rd International Conference on Machine Learning *
Fang Min: "Research on dynamic fusion methods of multiple classifiers in ensemble learning", Systems Engineering and Electronics *
Zhao Hui, Rong Lili, Li Xiao: "A new method for designing hierarchical support vector machine multi-class classifiers", Application Research of Computers *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945371A (en) * 2012-10-18 2013-02-27 浙江大学 Classifying method based on multi-label flexible support vector machine
CN102945372A (en) * 2012-10-18 2013-02-27 浙江大学 Classifying method based on multi-label constraint support vector machine
CN102945371B (en) * 2012-10-18 2015-06-24 浙江大学 Classifying method based on multi-label flexible support vector machine
CN102945372B (en) * 2012-10-18 2015-06-24 浙江大学 Classifying method based on multi-label constraint support vector machine
CN105654124A (en) * 2015-12-29 2016-06-08 大连楼兰科技股份有限公司 Method for increasing training speed and convergence speed of Adboost
CN105654124B (en) * 2015-12-29 2020-03-24 大连楼兰科技股份有限公司 Method for accelerating Adaboost training speed and convergence speed
CN109350072A (en) * 2018-11-15 2019-02-19 北京航空航天大学 A kind of cadence detection method based on artificial neural network
CN109350072B (en) * 2018-11-15 2020-08-04 北京航空航天大学 Step frequency detection method based on artificial neural network
CN110222727A (en) * 2019-05-15 2019-09-10 广东电网有限责任公司电力调度控制中心 A kind of short-term load forecasting method and device based on deep neural network
CN113537137A (en) * 2021-08-02 2021-10-22 浙江索思科技有限公司 Escalator-oriented human body motion intrinsic feature extraction method and system
CN117804670A (en) * 2024-02-28 2024-04-02 深圳市沃感科技有限公司 Error correction method and device for plate-type pressure sensor

Also Published As

Publication number Publication date
CN102646198B (en) 2014-12-17

Similar Documents

Publication Publication Date Title
CN102646198A (en) Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure
Chen et al. A fast clustering algorithm based on pruning unnecessary distance computations in DBSCAN for high-dimensional data
CN106504233B (en) Unmanned plane inspection image electric power widget recognition methods and system based on Faster R-CNN
CN103116762B (en) A kind of image classification method based on self-modulation dictionary learning
CN102999756B (en) The recognition methods of PSO-SVM to road sign is realized based on GPU
CN106611052A (en) Text label determination method and device
CN103020288B (en) Method for classifying data stream under a kind of dynamic data environment
CN109273096B (en) Medicine risk grading evaluation method based on machine learning
CN106557485A (en) A kind of method and device for choosing text classification training set
CN107632995A (en) The method and model training control system of Random Forest model training
CN104035954A (en) Hadoop-based recognition method for fake-licensed car
CN103020122A (en) Transfer learning method based on semi-supervised clustering
CN107832335A (en) A kind of image search method based on context deep semantic information
CN102289522A (en) Method of intelligently classifying texts
CN103336971B (en) Target matching method between multiple-camera based on multiple features fusion and incremental learning
CN110210538A (en) A kind of household image multiple-target identification method and device
CN106157375A (en) A kind of threedimensional model component categories automatic marking method
CN109241995A (en) A kind of image-recognizing method based on modified ArcFace loss function
CN109684476A (en) A kind of file classification method, document sorting apparatus and terminal device
CN106156805A (en) A kind of classifier training method of sample label missing data
CN107944472A (en) A kind of airspace operation situation computational methods based on transfer learning
Wei et al. Traffic sign detection and recognition via transfer learning
CN105224577A (en) Multi-label text classification method and system
Wang et al. Leaf recognition based on DPCNN and BOW
Liu et al. Tomato detection based on convolutional neural network for robotic application

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: Incubator of the National University Science Park, Wenzhou, Zhejiang 325000

Applicant after: Wenzhou University

Address before: No. 276 College Road, Wenzhou, Zhejiang 325000

Applicant before: Wenzhou University

CB03 Change of inventor or designer information

Inventor after: Wang Di

Inventor after: Zhang Xiaoqin

Inventor after: Ye Xiuzi

Inventor before: Zhang Xiaoqin

Inventor before: Wang Di

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: ZHANG XIAOQIN WANG DI TO: WANG DI ZHANG XIAOQIN YE XIUZI

C14 Grant of patent or utility model
GR01 Patent grant