CN103489033A - Incremental type learning method integrating self-organizing mapping and probability neural network - Google Patents
- Publication number: CN103489033A (application CN201310451473.9A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Abstract
The invention provides an incremental learning method that fuses self-organizing maps (SOM) with a probabilistic neural network (PNN). The method comprises the following steps. In initial learning, self-organizing maps are used to extract the sample distribution from the original samples: the original samples are divided into per-class training data sets, and each class's training data set is used to train an independent self-organizing map. The prototype vectors of the trained self-organizing maps then serve as pattern neurons to construct the probabilistic neural network. When a new data set belongs to a known class, learning is a local adjustment of that class's map; when it belongs to a new class, an independent self-organizing map is trained and its prototype vectors are added to the probabilistic neural network. The method overcomes the defect of traditional machine learning algorithms, which generally construct a decision model from a static data set and cannot effectively exploit the knowledge in newly available data.
Description
Technical field
The present invention relates to the field of machine learning techniques, and in particular to a neural-network-based incremental learning method that fuses self-organizing maps and a probabilistic neural network.
Background technology
Traditional machine learning algorithms usually construct a decision model from a static data set and cannot effectively exploit the knowledge hidden in newly available data. When new data become available, a traditional learning algorithm has to retrain the whole decision model, which leads to high computational complexity and low efficiency. Incremental learning is an effective approach to this problem, and in recent years it has attracted growing attention from both academia and industry.
Newly available data generally fall into two broad classes: data of classes already known to the decision model, and data of classes unknown to the decision model. A decision model with incremental learning ability should handle both types effectively. Incremental learning itself can be divided into two levels: feature-level incremental learning and decision-level incremental learning. In classification/prediction problems, a key step is extracting effective discriminative features. Many traditional feature extraction methods, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), build the feature extraction model from a static data set; when new data become available, the model must be retrained from scratch. Feature-level incremental learning aims to update the original feature extraction model with the new data without retraining. For example, classical PCA and LDA have been extended by researchers to incremental variants, IPCA (Incremental PCA) and ILDA (Incremental LDA), which incorporate new data incrementally into the existing feature extraction model. Decision-level incremental learning, by contrast, uses the new data directly to update the decision model itself.
The present invention concerns decision-level incremental learning and proposes an incremental learning method that fuses self-organizing maps (SOM) with a probabilistic neural network (PNN). As a nonparametric technique, the PNN is very simple and performs well on many classification problems. However, an important weakness of the PNN is that it needs all training samples in the decision phase, which inevitably causes large storage requirements and low computational efficiency. In many practical problems, new data are produced continuously, and these problems worsen further. Many researchers have adopted various methods to reduce the computational complexity of the conventional PNN while keeping its excellent properties (M. Feng, et al., "Probabilistic segmentation of volume data for visualization using SOM-PNN classifier," Proceedings of the 1998 IEEE Symposium on Volume Visualization, Research Triangle Park, North Carolina, United States, 1998; D. J. Yu, et al., "SOMPNN: an efficient non-parametric model for predicting transmembrane helices," Amino Acids, vol. 42, pp. 2195-2205, Jun. 2012). For example, clustering methods (K-means, fuzzy C-means) have been used to cluster the training data and then build the PNN from the cluster centers instead of the raw data (Z. L. Wang, et al., "An Incremental Learning Method Based on Probabilistic Neural Networks and Adjustable Fuzzy Clustering for Human Activity Recognition by Using Wearable Sensors," IEEE Transactions on Information Technology in Biomedicine, vol. 16, pp. 691-699, Jul. 2012). The defect of such methods, however, is that the number of cluster centers must be set in advance and is strongly affected by subjectivity. Recently, we used an SOM to learn the training data and then built the PNN from the prototype vectors of the trained SOM; the resulting PNN structure is compact and significantly reduces the computational complexity and storage requirements of the PNN (D. J. Yu, et al., "SOMPNN: an efficient non-parametric model for predicting transmembrane helices," Amino Acids, vol. 42, pp. 2195-2205, Jun. 2012). Although these methods effectively solve some problems of the conventional PNN, none of them has incremental learning ability, so they cannot effectively exploit the knowledge hidden in newly available data.
Summary of the invention
The object of the invention is to provide an incremental learning method that fuses self-organizing maps and a probabilistic neural network, overcoming the defect that traditional machine learning algorithms usually construct a decision model from a static data set and cannot effectively exploit the knowledge hidden in newly available data.
To achieve the above object, the technical solution adopted by the present invention is as follows:
An incremental learning method fusing self-organizing maps and a probabilistic neural network, suitable for incremental learning on different types of newly available data, comprises the following steps:
Initial learning: self-organizing maps are used to extract the sample distribution from the original samples; the original samples are divided, according to the class each sample belongs to, into a plurality of training data sets, and each class's training data set is used to train an independent self-organizing map;
Constructing the probabilistic neural network: the prototype vectors of the trained self-organizing maps are used as pattern neurons to construct the probabilistic neural network; and
Learning of a new data set, comprising:
1) if the new data set belongs to a known class, looking up the self-organizing map of that class and performing local adjustment learning to obtain a new self-organizing map, then replacing the original map of that class with the new one; and
2) if the new data set does not belong to a known class, training a new independent self-organizing map and using its prototype vectors as the pattern neurons of that class in the construction of the probabilistic neural network.
Further, in an embodiment, the process of the initial learning is as follows:
Let the original sample X = X_1 ∪ X_2 ∪ … ∪ X_M be the initial training set, where X_m is the training data set of class m. First, each X_m is used to train a self-organizing map, denoted SOM_m. Let K_m denote the number of prototype vectors in SOM_m, and let c_{m,k} denote the prototype vector of the k-th output node, 1 ≤ k ≤ K_m. Let I(c_{m,k}, X_m) denote the number of samples in the training set X_m that are mapped to output node c_{m,k}:
I(c_{m,k}, X_m) = |{x | x ∈ X_m, BMU(x) = c_{m,k}}|,
The importance of the k-th prototype vector of SOM_m is measured as:
ρ(c_{m,k}) = I(c_{m,k}, X_m),
Let X_m = {x_{m,1}, x_{m,2}, …, x_{m,S_m}} be the training data set of class m, where S_m is the number of samples in X_m. The probability density of class m is estimated with the prototype vectors:
f_m(x) = (1 / ((2π)^(d/2) σ^d S_m)) Σ_{k=1}^{K_m} ρ(c_{m,k}) exp(−‖x − c_{m,k}‖² / (2σ²)),
where d, the dimension of the training samples, equals the number d of input nodes of the self-organizing map, and σ is a smoothing factor estimated from the training data.
The final probabilistic neural network classifies by:
m* = arg max_{1 ≤ m ≤ M} p_m f_m(x),
where M is the total number of classes in the initial training set and p_m is the prior probability of the m-th class's training data set, with value in the range 0 to 1. In the decision phase, a new sample x is classified into the class m* given by the above formula.
Further, in an embodiment, in the initial learning step the self-organizing map is trained with a batch learning algorithm. Let the number of input nodes of the self-organizing map be d, equal to the dimension d of the input patterns, and let the number of output neurons be K. Each output neuron has a d-dimensional prototype vector w_k ∈ R^d connected to the d input neurons, where R^d denotes the d-dimensional input space. The training process of the batch learning algorithm is as follows:
(a) Partition all samples of the original sample X = X_1 ∪ X_2 ∪ … ∪ X_M into the Voronoi regions {V_1, …, V_K} induced by the prototype vector set {w_1, …, w_K} of the self-organizing map; that is, if the best matching unit (BMU) of a sample x ∈ X is output neuron k, then sample x is assigned to the Voronoi region V_k;
(b) Let n_j be the number of samples assigned to Voronoi region V_j, and compute the mean of these samples:
x̄_j = (1/n_j) Σ_{p=1}^{n_j} x_p, where x_p ∈ V_j, 1 ≤ p ≤ n_j;
(c) Update the prototype vectors with the computed means.
Repeat the three steps (a), (b), (c) until the training termination condition, reaching a specified number of iterations, is met.
Further, in an embodiment, for a new training data set X_new, let label(X_new) denote the class label of the samples in X_new. If label(X_new) already exists in the constructed probabilistic neural network, denote label(X_new) = m.
Next, X_new is used to perform incremental learning on the m-th self-organizing map: the set {c_{m,k} | 1 ≤ k ≤ K_m} of prototype vectors corresponding to the K_m output nodes is combined with X_new into a new training set, a new self-organizing map is trained on it, and the new self-organizing map replaces the original m-th one.
The prototype vector set of the newly trained self-organizing map is denoted {c'_{m,k} | 1 ≤ k ≤ K'_m}, where K'_m is the number of prototype vectors of the new self-organizing map. The importance of the K'_m new prototype vectors is then updated using an inheritance factor β, where 0 < β < 1.
Further, in an embodiment, for a new training data set X_new, let label(X_new) denote the class label of the samples in X_new, and denote label(X_new) = m_new. If m_new does not appear in the constructed probabilistic neural network, X_new is used to train a new self-organizing map, and the prototype vectors of this new self-organizing map are added to the probabilistic neural network as the pattern neurons of class m_new, participating in the construction of the probabilistic neural network and thereby realizing incremental learning.
Brief description of the drawings
Fig. 1 is a schematic diagram of the learning process of the incremental learning method fusing self-organizing maps and a probabilistic neural network.
Embodiment
To better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawing.
As shown in Figure 1, according to a preferred embodiment of the present invention, the incremental learning method fusing self-organizing maps and a probabilistic neural network is suitable for incremental learning on different types of newly available data and comprises the following steps:
Initial learning: self-organizing maps are used to extract the sample distribution from the original samples. The original samples are divided, according to the class each sample belongs to, into a plurality of training data sets; then each class's training data set is used to train an independent self-organizing map.
Constructing the probabilistic neural network: the prototype vectors of the trained self-organizing maps are used as pattern neurons to construct the probabilistic neural network. And
Learning of a new data set:
If the new data set belongs to a known class, the self-organizing map of that class is looked up and locally adjusted to obtain a new self-organizing map, which then replaces the original map of that class;
If the new data set does not belong to a known class, a new independent self-organizing map is trained, and its prototype vectors are used as the pattern neurons of that class in the construction of the probabilistic neural network.
A self-organizing map (Self-Organizing Map, SOM) has the ability to learn the distribution characteristics of data adaptively and can preserve the topological relations of the input patterns in the output space. A self-organizing map consists of two layers: an input layer and an output layer. The output layer is a network of output nodes and can be one-dimensional or two-dimensional. The input-layer and output-layer nodes are fully interconnected, and the learning of the self-organizing map is accomplished through a competition mechanism.
After learning, a self-organizing map guarantees that similar inputs yield similar outputs. Let the number of input nodes of the self-organizing map be d, equal to the dimension d of the input patterns, and let the number of output neurons be K. Each output neuron has a d-dimensional prototype vector w_k ∈ R^d connected to the d input neurons, where R^d denotes the d-dimensional input space.
A self-organizing map can be trained with a sequential learning algorithm or a batch learning algorithm. In the present embodiment, when the training data set is large, the batch learning algorithm is preferred. The training process of the batch learning algorithm is as follows:
(a) Partition all samples in the training data set X = X_1 ∪ X_2 ∪ … ∪ X_M, i.e., all samples in the original sample, into the Voronoi regions {V_1, …, V_K} induced by the prototype vector set {w_1, …, w_K} of the self-organizing map; that is, if the best matching unit of a sample x ∈ X is output neuron k, then sample x is assigned to the Voronoi region V_k.
(b) Let n_j be the number of samples assigned to Voronoi region V_j, and compute the mean of these samples:
x̄_j = (1/n_j) Σ_{p=1}^{n_j} x_p, where x_p ∈ V_j, 1 ≤ p ≤ n_j.
(c) Update the prototype vectors with the computed means.
Repeat the three steps (a), (b), (c) until the training termination condition, reaching a specified number of iterations, is met.
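The batch training loop above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented procedure itself: the neighborhood function is omitted (its radius taken as zero), so step (c) reduces to replacing each prototype by the mean of its Voronoi region, and the initialization from evenly spaced samples is an arbitrary illustrative choice.

```python
import numpy as np

def batch_som(X, K, n_iters=20):
    """Minimal batch-SOM sketch of steps (a)-(c), with the neighborhood
    function omitted so each prototype simply moves to the mean of its
    Voronoi region."""
    X = np.asarray(X, dtype=float)
    # initialize K prototype vectors w_k in R^d from spread-out samples
    W = X[np.linspace(0, len(X) - 1, K).astype(int)].copy()
    for _ in range(n_iters):
        # (a) assign every sample to the Voronoi region of its BMU
        d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
        bmu = d2.argmin(axis=1)
        # (b)+(c) move each prototype to the mean of its region
        for k in range(K):
            members = X[bmu == k]
            if len(members):          # leave empty regions' prototypes as-is
                W[k] = members.mean(axis=0)
    return W
```

With K = 2 and two well-separated clusters, the prototypes converge to the cluster means after a few iterations.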
A probabilistic neural network (Probabilistic Neural Network, PNN) derives from the Parzen-window probability density estimator. Let X_m = {x_{m,1}, x_{m,2}, …, x_{m,S_m}} be the training data set of class m, where S_m is the number of samples in X_m. The probability density function of class m can then be estimated as:
f_m(x) = (1 / ((2π)^(d/2) σ^d S_m)) Σ_{i=1}^{S_m} exp(−‖x − x_{m,i}‖² / (2σ²)),   (3)
where d is the dimension of the training samples and σ is a smoothing factor estimated from the training data. In the prediction/classification phase, a new sample x is classified into the class m* satisfying formula (5):
m* = arg max_{1 ≤ m ≤ M} p_m f_m(x),   (5)
where p_m is the prior probability of class m and M is the total number of classes.
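The classical Parzen-window PNN decision of formulas (3) and (5) can be sketched as follows. This is an illustration under the assumption of a shared Gaussian kernel with a fixed smoothing factor σ (the source does not reproduce its σ-estimation formula); `pnn_classify` and its parameter names are illustrative, not from the patent.

```python
import numpy as np

def pnn_classify(x, class_data, sigma=1.0, priors=None):
    """Evaluate the Parzen-window density f_m(x) of formula (3) for every
    class over all S_m training samples, then return the class m*
    maximizing p_m * f_m(x), as in formula (5)."""
    x = np.asarray(x, dtype=float)
    M = len(class_data)
    priors = priors if priors is not None else [1.0 / M] * M
    d = x.size
    scores = []
    for p_m, X_m in zip(priors, class_data):
        sq = ((np.asarray(X_m) - x) ** 2).sum(axis=1)
        f_m = np.exp(-sq / (2 * sigma ** 2)).mean() / ((2 * np.pi) ** (d / 2) * sigma ** d)
        scores.append(p_m * f_m)
    return int(np.argmax(scores))   # index m* of formula (5)
```

Note that every training sample enters the sum, which is exactly the storage and efficiency weakness the invention addresses by substituting SOM prototypes.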
The principle and implementation of the incremental learning method fusing self-organizing maps and a probabilistic neural network (IMSOMPNN) of the present embodiment are explained below in conjunction with the self-organizing map and probabilistic neural network described above.
This learning method adopts a multi-SOM strategy; that is, a separate self-organizing map is trained on the samples of each class, and the prototype vectors of each class's self-organizing map are then used as that class's pattern neurons for the subsequent construction of the probabilistic neural network. With this modular multi-SOM strategy, incremental learning on the two different types of new data can be realized easily, without retraining the whole model.
First, for a new training set of a class already known to the model (New Updating Data), only the self-organizing map of the corresponding class needs local adjustment learning; the other self-organizing maps need no adjustment.
Second, for a training set of a new class (New Class Data), i.e., a training set whose class is not known to the model, it suffices to independently train a new self-organizing map and add its prototype vectors to the probabilistic neural network.
Finally, the incremental learning method of the present embodiment has strong noise resistance. This benefits from the denoising ability of the self-organizing map and from the multi-SOM strategy, as opposed to the traditional approach of training a single self-organizing map on the training set as a whole.
As shown in Figure 1, in the present embodiment a protein training data set serves as the original sample, and the incremental learning method fusing self-organizing maps and a probabilistic neural network (IMSOMPNN) comprises three learning processes:
A. Learning of the initial model
Self-organizing maps are used to extract the sample distribution from the original samples: the original samples are divided, according to the class each sample belongs to, into a plurality of training data sets, and each class's training data set is used to train an independent self-organizing map. Specifically, let X = X_1 ∪ X_2 ∪ … ∪ X_M be the initial training set, where X_m is the training data set of class m. First, each X_m is used to train a self-organizing map, denoted SOM_m. Let K_m denote the number of prototype vectors in SOM_m, and let c_{m,k} denote the prototype vector of the k-th output node, 1 ≤ k ≤ K_m. Let I(c_{m,k}, X_m) denote the number of samples in the training set X_m that are mapped to output node c_{m,k}:
I(c_{m,k}, X_m) = |{x | x ∈ X_m, BMU(x) = c_{m,k}}|
The importance of the k-th prototype vector of SOM_m is measured as:
ρ(c_{m,k}) = I(c_{m,k}, X_m),
Let X_m = {x_{m,1}, x_{m,2}, …, x_{m,S_m}} be the training data set of class m, where S_m is the number of samples in X_m. The probability density of class m is estimated with the prototype vectors of SOM_m, where d, the dimension of the training samples, equals the number of input nodes of the self-organizing map, and σ is a smoothing factor. Combining the importance of each prototype vector, the density estimate is further written as:
f_m(x) = (1 / ((2π)^(d/2) σ^d S_m)) Σ_{k=1}^{K_m} ρ(c_{m,k}) exp(−‖x − c_{m,k}‖² / (2σ²)),
The final probabilistic neural network classifies by:
m* = arg max_{1 ≤ m ≤ M} p_m f_m(x),
where M is the total number of classes in the initial training set and p_m is the prior probability of the m-th class's training data set, with value in the range 0 to 1. In the decision phase, a new sample x is classified into the class m* given by the above formula.
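The prototype-based density of IMSOMPNN replaces the S_m raw training samples of each class with its K_m prototype vectors, each kernel weighted by the prototype's importance ρ. The following sketch assumes ρ is the raw BMU count, so that Σ_k ρ(c_{m,k}) = S_m and the estimate normalizes like the classical Parzen form; the function names are illustrative.

```python
import numpy as np

def bmu_counts(prototypes, X):
    """rho(c_k) = number of samples in X whose best matching unit is c_k."""
    d2 = ((np.asarray(X)[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return np.bincount(d2.argmin(axis=1), minlength=len(prototypes)).astype(float)

def sompnn_score(x, prototypes, rho, sigma=1.0):
    """Importance-weighted class score: K_m prototypes stand in for the
    S_m raw samples, each Gaussian kernel weighted by its BMU count rho_k."""
    x = np.asarray(x, dtype=float)
    d = x.size
    sq = ((prototypes - x) ** 2).sum(axis=1)
    S_m = rho.sum()
    return (rho * np.exp(-sq / (2 * sigma ** 2))).sum() / (
        S_m * (2 * np.pi) ** (d / 2) * sigma ** d)
```

The decision phase then picks the class whose (prior-weighted) score is largest, exactly as in the classical PNN but with far fewer pattern neurons.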
Suppose that after the initial PNN is trained, new data X_new arrive. Without loss of generality, assume the samples in X_new carry the same class label, i.e., belong to the same class; even when X_new contains data of different classes, it can be divided into disjoint subsets such that the data in each subset belong to the same class.
The two cases of incremental learning are examined below.
B. New data of a known class (New Updating Data)
For a new training data set X_new, let label(X_new) denote the class label of the samples in X_new. If label(X_new) already exists in the constructed probabilistic neural network, denote label(X_new) = m.
Next, X_new is used to perform incremental learning on the m-th self-organizing map: the set {c_{m,k} | 1 ≤ k ≤ K_m} of prototype vectors corresponding to the K_m output nodes is combined with X_new into a new training set, a new self-organizing map is trained on it, and the new self-organizing map replaces the original m-th one.
The prototype vector set of the newly trained self-organizing map is denoted {c'_{m,k} | 1 ≤ k ≤ K'_m}, where K'_m is the number of prototype vectors of the new self-organizing map. The importance of the K'_m new prototype vectors is then updated using an inheritance factor β, where 0 < β < 1.
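The known-class update can be sketched as follows. The source text does not reproduce the exact importance-update formula, so the β-blending below (old importances carried through the old prototypes' BMU counts, scaled by β, plus the new data's counts) is a hypothetical instantiation; `train_som` stands for any SOM-training routine and is passed in.

```python
import numpy as np

def _weighted_bmu_counts(prototypes, X, weights):
    # accumulate `weights` onto each sample's best matching prototype
    d2 = ((np.asarray(X)[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    out = np.zeros(len(prototypes))
    np.add.at(out, d2.argmin(axis=1), weights)
    return out

def update_known_class(old_prototypes, old_rho, X_new, train_som, beta=0.5):
    """Known-class incremental step: retrain a SOM on the union of the old
    prototype set and the new data, then blend importances with the
    inheritance factor beta (0 < beta < 1).  The blending rule is a
    hypothetical stand-in for the patent's unreproduced formula."""
    merged = np.vstack([old_prototypes, X_new])
    new_prototypes = np.asarray(train_som(merged))           # replaces SOM_m
    rho = beta * _weighted_bmu_counts(new_prototypes, old_prototypes, old_rho) \
        + _weighted_bmu_counts(new_prototypes, X_new, np.ones(len(X_new)))
    return new_prototypes, rho
```

Only the m-th map is retrained; the pattern neurons of all other classes are untouched, which is the point of the multi-SOM strategy.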
C. Data of a new class (New Class Data)
For a new training data set X_new, let label(X_new) denote the class label of the samples in X_new, and denote label(X_new) = m_new. If m_new does not appear in the constructed probabilistic neural network, X_new is used to train a new self-organizing map, and the prototype vectors of this new self-organizing map are added to the probabilistic neural network as the pattern neurons of class m_new, participating in the construction of the probabilistic neural network and thereby realizing incremental learning.
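The new-class path needs no retraining of existing classes. A sketch, under the assumption that the PNN is kept as a mapping from class label to a (prototypes, importances) pair; the container layout and names are illustrative.

```python
import numpy as np

def add_new_class(pnn_classes, new_label, X_new, train_som):
    """New-class incremental step: train an independent SOM on X_new and
    register its prototypes (with BMU-count importances) as the pattern
    neurons of class `new_label`.  Existing classes are left untouched."""
    prototypes = np.asarray(train_som(X_new))
    d2 = ((np.asarray(X_new)[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    rho = np.bincount(d2.argmin(axis=1), minlength=len(prototypes)).astype(float)
    pnn_classes[new_label] = (prototypes, rho)
    return pnn_classes
```

After the call, decision-phase scoring simply iterates over one more entry of the mapping.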
In summary, the incremental learning method fusing self-organizing maps and a probabilistic neural network provided by the invention overcomes the defect that traditional machine learning algorithms usually construct a decision model from a static data set and cannot effectively exploit the knowledge hidden in newly available data. Its beneficial effects are: (1) the prototype vectors of self-organizing maps are used as pattern neurons to build the probabilistic neural network, making the model structure compact, fast to compute, and noise-resistant; (2) the multi-SOM strategy provides modular processing: for the two types of new data, only a local adjustment or the training of one new self-organizing map to be added to the PNN is needed, which simplifies computation, allows convenient incremental learning on different types of new data, and suits applications with massive data.
Although the present invention has been disclosed above in terms of preferred embodiments, they are not intended to limit the invention. A person of ordinary skill in the art may make various modifications and variations without departing from the spirit and scope of the invention. The scope of protection of the invention is therefore defined by the appended claims.
Claims (5)
1. An incremental learning method fusing self-organizing maps and a probabilistic neural network, suitable for incremental learning on different types of newly available data, characterized in that the method comprises the following steps:
Initial learning: self-organizing maps are used to extract the sample distribution from the original samples; the original samples are divided, according to the class each sample belongs to, into a plurality of training data sets, and each class's training data set is used to train an independent self-organizing map;
Constructing the probabilistic neural network: the prototype vectors of the trained self-organizing maps are used as pattern neurons to construct the probabilistic neural network; and
Learning of a new data set, comprising:
1) if the new data set belongs to a known class, looking up the self-organizing map of that class and performing local adjustment learning to obtain a new self-organizing map, then replacing the original map of that class with the new one; and
2) if the new data set does not belong to a known class, training a new independent self-organizing map and using its prototype vectors as the pattern neurons of that class in the construction of the probabilistic neural network.
2. The incremental learning method fusing self-organizing maps and a probabilistic neural network according to claim 1, characterized in that the process of the initial learning is as follows:
Let the original sample X = X_1 ∪ X_2 ∪ … ∪ X_M be the initial training set, where X_m is the training data set of class m. First, each X_m is used to train a self-organizing map, denoted SOM_m. Let K_m denote the number of prototype vectors in SOM_m, and let c_{m,k} denote the prototype vector of the k-th output node, 1 ≤ k ≤ K_m. Let I(c_{m,k}, X_m) denote the number of samples in the training set X_m that are mapped to output node c_{m,k}:
I(c_{m,k}, X_m) = |{x | x ∈ X_m, BMU(x) = c_{m,k}}|,
The importance of the k-th prototype vector of SOM_m is measured as:
ρ(c_{m,k}) = I(c_{m,k}, X_m),
The prototype vectors of SOM_m are used to estimate the probability density of class m. Let X_m = {x_{m,1}, x_{m,2}, …, x_{m,S_m}} be the training data set of class m, where S_m is the number of samples in X_m. The probability density of class m is estimated as:
f_m(x) = (1 / ((2π)^(d/2) σ^d S_m)) Σ_{k=1}^{K_m} ρ(c_{m,k}) exp(−‖x − c_{m,k}‖² / (2σ²)),
where d, the dimension of the training samples, equals the number d of input nodes of the self-organizing map, and σ is a smoothing factor;
The final probabilistic neural network classifies by:
m* = arg max_{1 ≤ m ≤ M} p_m f_m(x),
where M is the total number of classes in the initial training set and p_m is the prior probability of the m-th class's training data set, with value in the range 0 to 1; in the decision phase, a new sample x is classified into the class m* given by the above formula.
3. The incremental learning method fusing self-organizing maps and a probabilistic neural network according to claim 2, characterized in that, in the initial learning step, the self-organizing map is trained with a batch learning algorithm; let the number of input nodes of the self-organizing map be d, equal to the dimension d of the input patterns, and let the number of output neurons be K; each output neuron has a d-dimensional prototype vector w_k ∈ R^d connected to the d input neurons, where R^d denotes the d-dimensional input space; the training process of the batch learning algorithm is as follows:
(a) partition all samples of the original sample X = X_1 ∪ X_2 ∪ … ∪ X_M into the Voronoi regions {V_1, …, V_K} induced by the prototype vector set {w_1, …, w_K} of the self-organizing map; that is, if the best matching unit of a sample x ∈ X is output neuron k, then sample x is assigned to the Voronoi region V_k;
(b) let n_j be the number of samples assigned to Voronoi region V_j, and compute the mean of these samples:
x̄_j = (1/n_j) Σ_{p=1}^{n_j} x_p, where x_p ∈ V_j, 1 ≤ p ≤ n_j;
(c) update the prototype vectors with the computed means;
repeat the three steps (a), (b), (c) until the training termination condition, reaching a specified number of iterations, is met.
4. The incremental learning method fusing self-organizing maps and a probabilistic neural network according to claim 2, characterized in that, for a new training data set X_new, let label(X_new) denote the class label of the samples in X_new; if label(X_new) already exists in the constructed probabilistic neural network, denote label(X_new) = m;
next, X_new is used to perform incremental learning on the m-th self-organizing map: the set {c_{m,k} | 1 ≤ k ≤ K_m} of prototype vectors corresponding to the K_m output nodes is combined with X_new into a new training set, a new self-organizing map is trained on it, and the new self-organizing map replaces the original m-th one;
the prototype vector set of the newly trained self-organizing map is denoted {c'_{m,k} | 1 ≤ k ≤ K'_m}, where K'_m is the number of prototype vectors of the new self-organizing map; the importance of the K'_m new prototype vectors is then updated using an inheritance factor β, where 0 < β < 1.
5. The incremental learning method fusing self-organizing maps and a probabilistic neural network according to claim 4, characterized in that, for a new training data set X_new, let label(X_new) denote the class label of the samples in X_new, and denote label(X_new) = m_new; if m_new does not appear in the constructed probabilistic neural network, X_new is used to train a new self-organizing map, and the prototype vectors of this new self-organizing map are added to the probabilistic neural network as the pattern neurons of class m_new, participating in the construction of the probabilistic neural network and realizing incremental learning.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310451473.9A CN103489033A (en) | 2013-09-27 | 2013-09-27 | Incremental type learning method integrating self-organizing mapping and probability neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103489033A true CN103489033A (en) | 2014-01-01 |
Family
ID=49829238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310451473.9A Pending CN103489033A (en) | 2013-09-27 | 2013-09-27 | Incremental type learning method integrating self-organizing mapping and probability neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103489033A (en) |
Application Events
- 2013-09-27: Application filed in China — CN201310451473.9A, published as CN103489033A (status: active, Pending)
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016145676A1 (en) * | 2015-03-13 | 2016-09-22 | 中国科学院声学研究所 | Big data processing method based on deep learning model satisfying k-degree sparse constraint |
US11048998B2 (en) | 2015-03-13 | 2021-06-29 | Institute Of Acoustics, Chinese Academy Of Sciences | Big data processing method based on deep learning model satisfying k-degree sparse constraint |
CN105653521A (en) * | 2016-01-15 | 2016-06-08 | 杭州数梦工场科技有限公司 | Data verification method and device |
CN105530702A (en) * | 2016-01-25 | 2016-04-27 | 杭州电子科技大学 | Wireless sensor network mobile node positioning method based on self-organizing maps |
CN108027889A (en) * | 2016-01-25 | 2018-05-11 | 华为技术有限公司 | Training and scheduling method for incremental learning cloud system and related equipment |
CN108027889B (en) * | 2016-01-25 | 2020-07-28 | 华为技术有限公司 | Training and scheduling method for incremental learning cloud system and related equipment |
CN107220671A (en) * | 2017-05-27 | 2017-09-29 | 重庆大学 | Artificial olfaction system online correction sample generation method based on self-organizing map |
CN107220671B (en) * | 2017-05-27 | 2020-07-14 | 重庆大学 | Artificial olfaction system online correction sample generation method based on self-organizing map |
CN109119089A (en) * | 2018-06-05 | 2019-01-01 | 安克创新科技股份有限公司 | Method and apparatus for applying penetrating processing to music |
CN110598837A (en) * | 2018-06-13 | 2019-12-20 | 北京深鉴智能科技有限公司 | Artificial neural network adjusting method and device |
CN109872162A (en) * | 2018-11-21 | 2019-06-11 | 阿里巴巴集团控股有限公司 | Risk control classification and identification method and system for handling customer complaint information |
CN110188378A (en) * | 2019-04-12 | 2019-08-30 | 浙江大学 | Neural-network-based aerodynamic data fusion method |
CN110070188B (en) * | 2019-04-30 | 2021-03-30 | 山东大学 | Incremental cognitive development system and method integrating interactive reinforcement learning |
CN110070188A (en) * | 2019-04-30 | 2019-07-30 | 山东大学 | Incremental cognitive development system and method integrating interactive reinforcement learning |
CN111062494A (en) * | 2019-12-26 | 2020-04-24 | 山东大学 | Robot self-organization-thinking-reversal cognitive development method and system with lifelong learning ability |
CN111062494B (en) * | 2019-12-26 | 2023-06-16 | 山东大学 | Robot self-organization-thinking-reversal cognitive development method and system with lifelong learning capability |
CN111271183A (en) * | 2020-02-26 | 2020-06-12 | 重庆红江机械有限责任公司 | Method and system for self-adaptive online prediction of state of diesel engine |
CN111271183B (en) * | 2020-02-26 | 2022-08-16 | 重庆红江机械有限责任公司 | Method and system for self-adaptive online prediction of state of diesel engine |
CN112836791A (en) * | 2021-01-08 | 2021-05-25 | 北京闭环科技有限公司 | Non-adversarial generative auto-encoding method and system based on dynamic surface segmentation |
CN112836791B (en) * | 2021-01-08 | 2024-02-09 | 北京航轨智行科技有限公司 | Non-adversarial generative auto-encoding method and system based on dynamic surface segmentation |
CN113344215A (en) * | 2021-06-01 | 2021-09-03 | 山东大学 | Extensible cognitive development method and system supporting new mode online learning |
CN115438755A (en) * | 2022-11-08 | 2022-12-06 | 腾讯科技(深圳)有限公司 | Incremental training method and device of classification model and computer equipment |
CN115438755B (en) * | 2022-11-08 | 2024-04-02 | 腾讯科技(深圳)有限公司 | Incremental training method and device for classification model and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103489033A (en) | Incremental type learning method integrating self-organizing mapping and probability neural network | |
CN109271522B (en) | Comment emotion classification method and system based on deep hybrid model transfer learning | |
Sharma et al. | Machine learning techniques for data mining: A survey | |
Pineda-Jaramillo | A review of Machine Learning (ML) algorithms used for modeling travel mode choice | |
Vieira et al. | Two cooperative ant colonies for feature selection using fuzzy models | |
CN106951825A (en) | Face image quality assessment system and implementation method | |
CN104573630B (en) | Multi-class EEG pattern online recognition method based on dual-SVM probability outputs | |
Han et al. | A combined online-learning model with K-means clustering and GRU neural networks for trajectory prediction | |
CN107578061A (en) | Imbalanced data classification method based on loss-minimization learning | |
CN107944559A (en) | Automatic entity relation recognition method and system | |
Huang et al. | A graph neural network-based node classification model on class-imbalanced graph data | |
Chadha et al. | An improved K-means clustering algorithm: a step forward for removal of dependency on K | |
CN111476261A (en) | Community-enhanced graph convolution neural network method | |
CN103593855A (en) | Clustering image segmentation method based on particle swarm optimization and spatial distance metric | |
Ma et al. | LabelForest: Non-parametric semi-supervised learning for activity recognition | |
Colliri et al. | A network-based high level data classification technique | |
Xie et al. | CNN and KPCA-based automated feature extraction for real time driving pattern recognition | |
Zhang et al. | Node features adjusted stochastic block model | |
Kim et al. | Knowledge extraction and representation using quantum mechanics and intelligent models | |
Cupertino et al. | Network-based supervised data classification by using an heuristic of ease of access | |
CN113989544A (en) | Group discovery method based on deep map convolution network | |
Raikar et al. | Efficiency comparison of supervised and unsupervised classifier on content based classification using shape, color, texture | |
Li et al. | Industrial image classification using a randomized neural-net ensemble and feedback mechanism | |
CN105760471A (en) | Classification method for two types of texts based on multiconlitron | |
CN115456093A (en) | High-performance graph clustering method based on attention-graph neural network |
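The first entry in the table above is this application itself; per the abstract, its method trains one self-organizing map (SOM) per class, uses the trained prototype vectors as the pattern neurons of a probabilistic neural network (PNN), and learns a new class by training an independent SOM and appending its prototypes. A minimal sketch of that combination follows — the class names, parameters (`sigma`, `n_units`), and toy data are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

class SOM:
    """Minimal 1-D self-organizing map: learns prototype vectors for one class."""
    def __init__(self, n_units, dim, seed=0):
        self.w = np.random.default_rng(seed).normal(size=(n_units, dim))

    def train(self, X, epochs=50, lr0=0.5):
        n = len(self.w)
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                    # decaying learning rate
            radius = max((n / 2) * (1 - t / epochs), 0.5)  # shrinking neighborhood
            for x in X:
                bmu = np.argmin(np.linalg.norm(self.w - x, axis=1))  # best-matching unit
                d = np.abs(np.arange(n) - bmu)                       # grid distance to BMU
                h = np.exp(-d ** 2 / (2 * radius ** 2))              # neighborhood weights
                self.w += lr * h[:, None] * (x - self.w)
        return self.w  # trained prototype vectors

class SOMPNN:
    """PNN whose pattern neurons are SOM prototypes, one independent SOM per class."""
    def __init__(self, sigma=0.5, n_units=8):
        self.sigma, self.n_units = sigma, n_units
        self.prototypes = {}  # class label -> array of prototype vectors

    def learn_class(self, label, X):
        # Incremental step: a new class trains its own SOM; existing classes untouched.
        som = SOM(self.n_units, X.shape[1], seed=len(self.prototypes))
        self.prototypes[label] = som.train(X)

    def predict(self, x):
        # Parzen-style score: sum of Gaussian kernels over each class's pattern neurons.
        scores = {c: np.exp(-np.sum((P - x) ** 2, axis=1)
                            / (2 * self.sigma ** 2)).sum()
                  for c, P in self.prototypes.items()}
        return max(scores, key=scores.get)

# Toy data: two well-separated Gaussian classes, then a third class added later.
rng = np.random.default_rng(1)
net = SOMPNN()
net.learn_class("A", rng.normal([0, 0], 0.3, size=(60, 2)))
net.learn_class("B", rng.normal([4, 4], 0.3, size=(60, 2)))
print(net.predict(np.array([0.1, -0.2])))   # a point near class A
net.learn_class("C", rng.normal([-4, 4], 0.3, size=(60, 2)))  # new class, no retraining
print(net.predict(np.array([-3.9, 4.1])))   # a point near the newly added class C
```

Because each class's pattern neurons come from its own SOM, adding class C leaves the neurons for A and B untouched — the distinction the abstract draws between partial adjustment for known classes and training an independent SOM for a new class.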
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20140101 |