CN110298434A - Ensemble deep belief network based on fuzzy partitioning and fuzzy weighting - Google Patents

Ensemble deep belief network based on fuzzy partitioning and fuzzy weighting Download PDF

Info

Publication number
CN110298434A
CN110298434A (application CN201910445367.7A)
Authority
CN
China
Prior art keywords
fuzzy
dbn
model
division
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910445367.7A
Other languages
Chinese (zh)
Other versions
CN110298434B (en)
Inventor
蒋云良
张雄涛
胡文军
颜成钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huzhou University
Original Assignee
Huzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huzhou University filed Critical Huzhou University
Priority to CN201910445367.7A priority Critical patent/CN110298434B/en
Publication of CN110298434A publication Critical patent/CN110298434A/en
Application granted granted Critical
Publication of CN110298434B publication Critical patent/CN110298434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an ensemble deep belief network based on fuzzy partitioning and fuzzy weighting, comprising the following steps in order: a) partition the training dataset into K subsets using the fuzzy clustering algorithm FCM; b) model each subset with a DBN of a different structure, each DBN submodel having different numbers of hidden nodes per layer, thereby forming K DBN models that are trained independently in parallel; c) combine the results of the models by fuzzy weighting to form the final output. The algorithm can solve large-sample classification problems effectively and quickly, overcoming the high time complexity of a single DBN for data classification; moreover, FE-DBN avoids overfitting and achieves high classification accuracy.

Description

An ensemble deep belief network based on fuzzy partitioning and fuzzy weighting
[technical field]
The present invention relates to the technical field of fuzzy recognition and machine learning, and in particular to an ensemble deep belief network based on fuzzy partitioning and fuzzy weighting.
[background technique]
In recent years, deep learning has made breakthrough progress in image recognition and speech recognition, and has gradually become one of the hottest research directions in machine learning. The restricted Boltzmann machine (RBM), owing to its strong representational power and ease of inference, has been used successfully as a structural building block of deep neural networks. Models built from RBMs, such as the deep belief network (DBN) and the deep Boltzmann machine (DBM), are currently regarded as among the most effective deep learning algorithms. The DBN is a typical representative of deep learning and usually achieves high accuracy in pattern recognition tasks such as image and speech recognition, but training a single DBN is very expensive: the fine-tuning stage uses the backpropagation (BP) algorithm, which is hard to parallelize across machines, so training on large-scale data is extremely difficult. In summary, the DBN has two main problems: 1) the time complexity of training a single DBN is high; 2) achieving good results usually requires many hidden nodes, yet more hidden nodes make overfitting more likely. Although Deng Li et al. improved DBN performance by modifying the network structure, the above problems remain unsolved.
Classification is a key problem of deep learning, and improving classifier performance is one of the main goals of classifier research. Fuzzy theory can be combined with classifiers to handle uncertainty. When constructing a fuzzy classification model, one important task is to divide the model's input space into multiple fuzzy regions or fuzzy subspaces, i.e., fuzzy partitioning. There are generally three partitioning methods: grid partitioning, tree partitioning, and block partitioning. Grid partitioning divides each input dimension, obtains its fuzzy sets, and then maps the fuzzy sets to fuzzy regions according to fuzzy system theory. Grid partitioning is easy to use, but when the number of input features is large it suffers from rule explosion, a phenomenon similar to the curse of dimensionality. Tree partitioning generates one partition corresponding to a fuzzy region at a time, each split producing a dividing surface. Although tree partitioning avoids rule explosion, it is not easy to use: a suitable tree must be found with heuristic rules, so designing an optimal tree partition is difficult. Block partitioning analyzes the input and output data and divides the input space of a pre-generated approximation into fuzzy regions, each of which describes the behavior of the input-output data. Block partitioning is a more flexible method that absorbs the advantages of the first two approaches while avoiding their shortcomings.
To better exploit the expressive power of deep models, further improve the accuracy of the DBN in practical applications, and speed up DBN training, and inspired by the above ideas, we propose an ensemble deep belief network based on fuzzy partitioning and fuzzy weighting.
[summary of the invention]
The object of the invention is to solve the above problems of the prior art by proposing an ensemble deep belief network based on fuzzy partitioning and fuzzy weighting for data classification, which can solve large-sample classification problems effectively and quickly and overcomes the high time complexity of a single DBN for data classification.
To achieve the above object, the invention proposes an ensemble deep belief network based on fuzzy partitioning and fuzzy weighting, comprising the following steps in order:
a) Partition the training dataset into K subsets using the fuzzy clustering algorithm FCM;
b) Model each subset with a DBN of a different structure, each DBN submodel having different numbers of hidden nodes per layer, thereby forming K DBN models, each trained independently in parallel;
c) Combine the results of the models by fuzzy weighting to form the final output.
Preferably, step a) performs fuzzy grouping of the training dataset using the fuzzy clustering algorithm FCM, whose objective function is:

J(U, V) = Σ_{i=1..K} Σ_{j=1..N} μ_ij^m ||x_j − υ_i||^2

where K is the number of partitions, N is the number of samples, υ_i = (υ_i1, ..., υ_id) is the center of the i-th class, μ_ij denotes the degree of membership of the j-th sample in the i-th class, m is the fuzzy exponent satisfying m ≥ 2, and x_j denotes the j-th sample point;

Introducing Lagrange multipliers to construct a new objective function, the iterative update formulas for the memberships and the cluster centers are derived as:

μ_ij = 1 / Σ_{k=1..K} ( ||x_j − υ_i|| / ||x_j − υ_k|| )^{2/(m−1)},  υ_i = Σ_{j=1..N} μ_ij^m x_j / Σ_{j=1..N} μ_ij^m

According to the above two formulas, after the iteration terminates, defuzzifying the obtained membership matrix U yields the space partition matrix;

Calculate the width of each cluster;

Fuzzy-partition the training dataset according to the cluster centers and widths, using the following formula:

θ_j = { x : |x_s − υ_js| ≤ ξ·σ_js },  s = 1, 2, ..., q,  j = 1, 2, ..., K,

where θ_j is the defined partition subset, σ_js is the width of the j-th cluster in dimension s, q is the dimensionality, and ξ is the overlap factor; the larger ξ is, the fuzzier the subset partition.
Preferably, step b) calls Hinton's DBN algorithm and runs the submodels in parallel.
Preferably, after each submodel of step c) is trained, test data x_i is given, its output on each model is computed, and the weights are calculated with a triangular membership function:

μ_k^s(x_i) = max( 0, 1 − |x_is − υ_ks| / (ξ·σ_ks) ),  w_k(x_i) = Π_{s=1..q} μ_k^s(x_i) / Σ_{l=1..K} Π_{s=1..q} μ_l^s(x_i)

The sample space is partitioned and each classifier operates on its own sample subspace; the classifier in whose subspace the sample shows the best local classification performance receives the larger weight;

Finally, the results of the DBN classifiers are combined by fuzzy weighting:

ŷ_i = Σ_{k=1..K} w_k(x_i) · LCM_k(x_i)

where LCM_k(x_i) is the classification result of sample x_i on the k-th model (its local classification model), and ŷ_i is the final output obtained after fuzzy weighting of the K model results.
Beneficial effects of the invention: the invention proposes an ensemble deep belief network based on fuzzy partitioning and fuzzy weighting; the corresponding ensemble classification algorithm is named FE-DBN. First, the training data are divided into multiple subsets by the fuzzy clustering algorithm FCM; then DBNs with different structures are trained on the subsets in parallel; finally, drawing on ideas from fuzzy set theory, the results of the classifiers are combined by fuzzy weighting. The algorithm can solve large-sample classification problems effectively and quickly, overcoming the high time complexity of a single DBN for data classification; moreover, FE-DBN avoids overfitting and achieves high classification accuracy.
The features and advantages of the invention will be described in detail through embodiments in conjunction with the accompanying drawings.
[Description of the drawings]
Fig. 1 is a schematic diagram of the RBM model;
Fig. 2 is a structural block diagram of the DBN;
Fig. 3 is a structural block diagram of the FE-DBN of the present invention;
Fig. 4 is a schematic diagram of the fuzzy partitioning;
Fig. 5 is a schematic diagram of the artificial datasets: a) spiral, b) Gaussian.
[specific embodiment]
The restricted Boltzmann machine is a generative stochastic network proposed by Hinton and Sejnowski in 1986. It is an energy-based probabilistic graphical model consisting of one visible layer and one hidden layer. As shown in Fig. 1, v and h denote the visible and hidden layers respectively, and W denotes the connection weights between the two layers. Connections exist only between the layers, never within a layer. In the figure, h has m nodes and v has n nodes, with individual nodes denoted v_i and h_j. The visible layer observes the data; the hidden layer extracts features. The hidden and visible units of an RBM can be arbitrary exponential-family units. The present invention only discusses the case where all visible and hidden units follow Bernoulli distributions, i.e., all visible and hidden units are binary variables, v_i, h_j ∈ {0, 1}.
The RBM is an energy-based model; as shown in Fig. 1, its energy function is defined as:

E(v, h | θ) = −Σ_{i=1..n} b_i v_i − Σ_{j=1..m} c_j h_j − Σ_{i=1..n} Σ_{j=1..m} v_i W_ij h_j,  θ = {b, c, W}    (1)

where b and c are the bias vectors of the visible and hidden layers respectively, and W is the weight matrix. From the energy function, the joint probability distribution of v and h is obtained:

P(v, h | θ) = exp( −E(v, h | θ) ) / Z(θ)    (2)

where Z(θ) = Σ_{v,h} exp( −E(v, h | θ) ) is the normalizing term (partition function).
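As an illustration of the energy function (1) and joint distribution (2) above, a small NumPy sketch that enumerates every binary configuration of a tiny RBM and normalizes by Z explicitly (function names are illustrative, not from the patent; brute-force enumeration is only feasible for toy sizes):

```python
import numpy as np

def rbm_energy(v, h, W, b, c):
    """Energy of a Bernoulli RBM configuration: E = -b.v - c.h - v.W.h."""
    return -(b @ v) - (c @ h) - v @ W @ h

def joint_prob_table(W, b, c):
    """Exact joint P(v, h) by enumerating all binary states (tiny RBMs only)."""
    n, m = W.shape
    vs = np.array(np.meshgrid(*[[0, 1]] * n)).T.reshape(-1, n)   # all visible states
    hs = np.array(np.meshgrid(*[[0, 1]] * m)).T.reshape(-1, m)   # all hidden states
    E = np.array([[rbm_energy(v, h, W, b, c) for h in hs] for v in vs])
    P = np.exp(-E)
    return P / P.sum()          # divide by Z, the normalizing term
```

Enumerating Z is exponential in n + m, which is exactly why real RBM training resorts to the sampling-based contrastive divergence described later.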
Stacking multiple RBMs forms a DBN, the output of each RBM serving as the input of the next. As shown in Fig. 2, the bottom layer is the input layer, the top layer is the output layer, and the middle layers are hidden layers. DBN learning has two stages: pre-training and fine-tuning. Pre-training proceeds layer by layer in a greedy unsupervised fashion, mapping the input layer to the output layer so as to learn complex nonlinear functions; fine-tuning is supervised, using the backpropagation (BP) algorithm to adjust the parameters of the whole DBN from the top layer down to the bottom.
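The greedy layer-wise pre-training described above can be sketched as follows: a minimal NumPy illustration that trains one RBM per layer with CD-1 and feeds its hidden probabilities to the next layer. The supervised BP fine-tuning stage is omitted, and all function names are mine, not the patent's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_dbn(X, layer_sizes, epochs=20, eps=0.05, seed=0):
    """Greedy layer-wise pre-training: one Bernoulli RBM per layer, CD-1 updates."""
    rng = np.random.default_rng(seed)
    params, inp = [], X
    for m in layer_sizes:
        n = inp.shape[1]
        W, b, c = 0.01 * rng.normal(size=(n, m)), np.zeros(n), np.zeros(m)
        for _ in range(epochs):
            ph0 = sigmoid(inp @ W + c)                       # P(h=1 | data)
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b)                      # reconstruction
            v1 = (rng.random(pv1.shape) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + c)
            k = inp.shape[0]
            W += eps * (inp.T @ ph0 - v1.T @ ph1) / k        # data - recon statistics
            b += eps * (inp - v1).mean(axis=0)
            c += eps * (ph0 - ph1).mean(axis=0)
        params.append((W, b, c))
        inp = sigmoid(inp @ W + c)       # hidden activations feed the next RBM
    return params
```

Each layer is trained only on the representation produced by the layer below it, which is what makes the scheme "greedy".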
Although the DBN has powerful knowledge-representation ability, when handling large-scale data or big data the fine-tuning stage requires a great deal of time to train the model, making the training time especially long.
The present invention proposes an ensemble deep belief network based on fuzzy partitioning and fuzzy weighting, comprising the following steps in order:
a) Partition the training dataset into K subsets using the fuzzy clustering algorithm FCM;
b) Model each subset with a DBN of a different structure, each DBN submodel having different numbers of hidden nodes per layer, thereby forming K DBN models, each trained independently in parallel;
c) Combine the results of the models by fuzzy weighting to form the final output.
The realization process is as follows:
As shown in Fig. 3, the workflow of FE-DBN is as follows. First, the training dataset is fuzzily grouped with the fuzzy clustering algorithm FCM, whose objective function is:

J(U, V) = Σ_{i=1..K} Σ_{j=1..N} μ_ij^m ||x_j − υ_i||^2

where K is the number of partitions, N is the number of samples, υ_i = (υ_i1, ..., υ_id) is the center of the i-th class, μ_ij denotes the degree of membership of the j-th sample in the i-th class, m is the fuzzy exponent satisfying m ≥ 2, and x_j denotes the j-th sample point;

Introducing Lagrange multipliers to construct a new objective function, the iterative update formulas for the memberships and the cluster centers are derived as:

μ_ij = 1 / Σ_{k=1..K} ( ||x_j − υ_i|| / ||x_j − υ_k|| )^{2/(m−1)},  υ_i = Σ_{j=1..N} μ_ij^m x_j / Σ_{j=1..N} μ_ij^m

According to the above two formulas, after the iteration terminates, defuzzifying the obtained membership matrix U yields the space partition matrix;
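The FCM alternation described above can be sketched in NumPy as follows (a minimal illustration; the function name and initialization choice are mine, not the patent's):

```python
import numpy as np

def fcm(X, K, m=2.0, iters=100, tol=1e-6, seed=0):
    """Fuzzy C-Means: alternate the membership and center update formulas
    until the membership matrix U stabilizes."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((K, N))
    U /= U.sum(axis=0)                        # each sample's memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)          # center update
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=0)                         # membership update
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```

The returned membership matrix U is what gets defuzzified into the space partition matrix in the step above.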
Calculate the width of each cluster;

Fuzzy-partition the training dataset according to the cluster centers and widths, using the following formula:

θ_j = { x : |x_s − υ_js| ≤ ξ·σ_js },  s = 1, 2, ..., q,  j = 1, 2, ..., K,    (15)

where θ_j is the defined partition subset, σ_js is the width of the j-th cluster in dimension s, q is the dimensionality, and ξ is the overlap factor; the larger ξ is, the fuzzier the subset partition. The fuzzy partition is illustrated in Fig. 4. After the fuzzy partitioning of the original dataset is complete, DBNs of different structures are trained in parallel on the subsets.
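Under the assumption that the cluster "width" is a membership-weighted standard deviation per dimension (the exact width formula is not reproduced in this text), the overlapping subset assignment with overlap factor ξ can be sketched as:

```python
import numpy as np

def fuzzy_partition(X, U, V, xi=3.0, m=2.0):
    """Assign each sample to every subset whose fuzzy region it falls in.

    A sample x belongs to subset j when |x_s - v_js| <= xi * sigma_js in every
    dimension s; sigma is a membership-weighted standard deviation (one
    plausible reading of the 'width', an assumption of this sketch).
    """
    Um = U ** m                                          # (K, N) fuzzy weights
    diff2 = (X[None, :, :] - V[:, None, :]) ** 2         # (K, N, q)
    sigma = np.sqrt((Um[:, :, None] * diff2).sum(axis=1)
                    / Um.sum(axis=1)[:, None])           # (K, q) per-dim widths
    inside = np.abs(X[None, :, :] - V[:, None, :]) <= xi * sigma[:, None, :]
    return inside.all(axis=2)                            # (K, N) boolean subsets
```

Because a sample can satisfy the condition for several clusters at once, the subsets overlap, and enlarging ξ enlarges every region, matching the statement that a larger ξ makes the partition fuzzier.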
What matters in the joint distribution of formula (2) is the marginal distribution P(v | θ). Since the RBM has no intra-layer connections, given the states of the visible units the activation states of the hidden units are conditionally independent. The activation probability of the j-th hidden unit is:

P(h_j = 1 | v, θ) = σ( c_j + Σ_i v_i W_ij )    (16)

where σ(x) = 1 / (1 + e^(−x)) is the sigmoid activation function. The activation probability of the i-th visible unit is:

P(v_i = 1 | h, θ) = σ( b_i + Σ_j W_ij h_j )    (17)
The RBM learns its parameters with the contrastive divergence (CD-k) algorithm proposed by Hinton; it has been shown that when a training sample is used to initialize v^(0), only a small number of sampling steps (generally k = 1) is needed to obtain a good approximation. With CD-k, the update rule for each parameter is:

ΔW = ε( ⟨v hᵀ⟩_data − ⟨v hᵀ⟩_recon ),  Δb = ε( ⟨v⟩_data − ⟨v⟩_recon ),  Δc = ε( ⟨h⟩_data − ⟨h⟩_recon )    (18)

where ε is the pre-training learning rate, ⟨·⟩_data is the expectation under the distribution defined by the training data, and ⟨·⟩_recon is the expectation under the distribution defined by the reconstructed model. Using formula (18), the parameters of each DBN submodel are updated iteratively.
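A single CD-1 step implementing formulas (16)-(18) might look like this in NumPy (a sketch; batch averaging and the choice to use probabilities rather than samples for the hidden statistics are common conventions, not necessarily the patent's exact implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, eps=0.05, rng=None):
    """One CD-1 update for a Bernoulli RBM on a batch v0 (rows = samples)."""
    if rng is None:
        rng = np.random.default_rng()
    ph0 = sigmoid(v0 @ W + c)                  # formula (16): P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                # formula (17): reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    n = v0.shape[0]
    W += eps * (v0.T @ ph0 - v1.T @ ph1) / n   # formula (18): <vh>_data - <vh>_recon
    b += eps * (v0 - v1).mean(axis=0)
    c += eps * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

Repeating this step for maxepoch epochs per layer is the pre-training loop; the trained hidden probabilities then feed the next RBM in the DBN stack.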
After each submodel is trained, test data x_i is given, its output on each model is computed, and the weights are calculated with a triangular membership function:

μ_k^s(x_i) = max( 0, 1 − |x_is − υ_ks| / (ξ·σ_ks) )    (19)

w_k(x_i) = Π_{s=1..q} μ_k^s(x_i) / Σ_{l=1..K} Π_{s=1..q} μ_l^s(x_i)    (20)

The sample space is partitioned and each classifier operates on its own sample subspace; the classifier in whose subspace the sample shows the best local classification performance receives the larger weight;

Finally, the results of the DBN classifiers are combined by fuzzy weighting:

ŷ_i = Σ_{k=1..K} w_k(x_i) · LCM_k(x_i)    (21)

where LCM_k(x_i) is the classification result of sample x_i on the k-th model (its local classification model), and ŷ_i is the final output obtained after fuzzy weighting of the K model results.
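Assuming a per-dimension triangular membership that peaks at the cluster center and reaches zero at ξ·σ (one plausible reading of the weighting above; the exact formulas are not reproduced in this text), the fuzzy-weighted combination can be sketched as:

```python
import numpy as np

def triangular_weight(x, V, sigma, xi=3.0):
    """Triangular membership of sample x in each fuzzy region: 1 at the center,
    linear decay to 0 at xi*sigma, combined across dimensions by a product
    (an assumption of this sketch)."""
    mu = np.clip(1.0 - np.abs(x - V) / (xi * sigma), 0.0, None)   # (K, q)
    return mu.prod(axis=1)                                        # (K,)

def fuzzy_weighted_predict(x, V, sigma, submodels, xi=3.0):
    """Combine K local classifiers by fuzzy-weighted voting over class scores.
    Each submodel is a callable returning a class-score vector."""
    w = triangular_weight(x, V, sigma, xi)
    w = w / w.sum() if w.sum() > 0 else np.full(len(submodels), 1.0 / len(submodels))
    scores = np.stack([f(x) for f in submodels])    # (K, n_classes)
    return int(np.argmax(w @ scores))               # weighted vote
```

A sample deep inside one fuzzy region gets essentially all its weight from that region's local model, which is the intended "locally best classifier gets the larger weight" behavior.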
The FE-DBN algorithm is realized as follows:
1) Initialization: set the number of subsets K, the overlap factor ξ, and the hidden-node counts and iteration periods of the DBN submodels; initialize the values of W, b, c and the learning rate ε;
2) Subset partitioning: obtain the center and width of each cluster with the fuzzy clustering algorithm FCM, and divide the source dataset into K subsets according to formula (15);
3) Train the submodels DBN_1 to DBN_K in parallel:
for all visible units, compute P(h_j = 1 | v, θ) with formula (16) and sample h_j ∈ {0, 1}; for all hidden units, compute P(v_i = 1 | h, θ) with formula (17) and sample v_i ∈ {0, 1}; update the RBM parameters W, b, c with formula (18), namely:
W = W + ΔW;  b = b + Δb;  c = c + Δc
Repeat step 3) until the iteration period is met;
4) Compute the membership of each test datum in each subset with formulas (19)-(20), feed the test data into the K submodels obtained in step 3) to output K classification results, and integrate them into the final output with formula (21).
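Putting steps 1)-4) together, a toy end-to-end skeleton of the partition-train-combine data flow (with hard cluster assignment standing in for FCM and trivial nearest-class-mean stubs standing in for trained DBN submodels; everything here is an illustrative stand-in, not the patent's implementation):

```python
import numpy as np

def fe_pipeline(Xtr, ytr, Xte, K=2, seed=0):
    """FE-DBN skeleton: split the data -> one local model per subset ->
    distance-weighted vote over the local models' outputs."""
    rng = np.random.default_rng(seed)
    # step 2 stand-in: crude K-means-style centers instead of FCM
    V = Xtr[rng.choice(len(Xtr), K, replace=False)]
    for _ in range(20):
        lab = np.argmin(np.linalg.norm(Xtr[:, None] - V[None], axis=2), axis=1)
        V = np.array([Xtr[lab == j].mean(axis=0) if (lab == j).any() else V[j]
                      for j in range(K)])
    classes = np.unique(ytr)
    # step 3 stand-in: one local model per subset (per-class means of the subset)
    local = []
    for j in range(K):
        Xj, yj = Xtr[lab == j], ytr[lab == j]
        means = np.array([Xj[yj == cl].mean(axis=0) if (yj == cl).any()
                          else Xtr[ytr == cl].mean(axis=0) for cl in classes])
        local.append(means)
    # step 4: weight each local model by closeness of x to its cluster center
    preds = []
    for x in Xte:
        w = 1.0 / (np.linalg.norm(x - V, axis=1) + 1e-9)
        w /= w.sum()
        score = np.zeros(len(classes))
        for j, means in enumerate(local):
            score += w[j] * (-np.linalg.norm(x - means, axis=1))
        preds.append(classes[np.argmax(score)])
    return np.array(preds)
```

In the real algorithm each `local` entry would be an independently pre-trained and fine-tuned DBN, and the weights would come from the triangular memberships of formulas (19)-(20); the surrounding control flow is the same.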
Experiments and analysis
In this section, artificial data and UCI data are used to verify and assess the proposed fast DBN classification algorithm based on fuzzy partitioning and fuzzy weighting (FE-DBN), and its performance is compared with the deep belief network algorithm. To verify the effectiveness of the proposed FE-DBN algorithm, the comparison algorithms are the local classification models DBN_K and the global classification model DBN, where DBN_K denotes dividing the original dataset into K subsets and building a local deep belief network classification model on each subset. All experimental results use five-fold cross-validation, run ten times, and the mean is reported.
S.1 Experimental setup
S.1.1 Datasets
Two kinds of artificial datasets are generated, as in Fig. 5: a) spiral and b) Gaussian, each with 4000 samples; the spiral set has 2 classes in 2 dimensions, and the Gaussian set has 4 classes in 2 dimensions. The constructed spiral dataset has 2000 samples in each of the positive and negative classes; the Gaussian dataset has 1000 samples per class, with class centers [7 8], [15 13], [15 5], [23 8] and a common covariance of [4 0; 0 4]. The real datasets all come from UCI.
Table 1: artificial datasets
Table 2: UCI datasets
S.1.2 Parameter settings and experimental environment
A three-layer DBN is used in the experiments. ξ controls the flexible width of the subsets; experiments found that ξ = 3 gives good results, with fine adjustment according to the distribution of the specific dataset. The DBN code follows http://www.cs.toronto.edu/~hinton/ ; the RBM iteration period maxepoch = 20, controlling the number of RBM pre-training iterations and the number of fine-tuning passes over the model parameters. The weight learning rate epsilonw = 0.05; the visible-bias learning rate epsilonvb = 0.05; the hidden-bias learning rate epsilonhb = 0.05; the weight-decay coefficient weightcost = 0.0002; the momentum rates initialmomentum = 0.5 and finalmomentum = 0.9. The hidden-node settings are shown in the comparison tables of the experiments in section S.2.1.
Algorithm performance is measured by mean accuracy, standard deviation, and running time (training time + testing time).
The experimental environment is an Intel(R) Core(TM) i3 3.40 GHz CPU, 8 GB memory, Windows 10, and Matlab 2016a.
S.2 Experimental results and analysis
To further explore how the number of fuzzy partitions of the dataset affects classification accuracy and running time, the dataset is divided into different numbers of subsets, and experiments are compared using different hidden-node combinations. As shown in Table 3, the local classification models DBN_K use 3 and 4 subsets respectively; "28+22+19" means that in DBN_1 the numbers of hidden nodes in the first, second, and third layers are 28, 22, and 19.
S.2.1 Artificial datasets
This section mainly verifies the effectiveness of the proposed FE-DBN algorithm on constructed simulated datasets. From the experimental results in Tables 3 and 4 it can be seen that the spiral dataset is hard to partition well and its accuracy is not high, but FE-DBN still improves it; on the Gaussian dataset, the accuracy of FE-DBN is slightly higher than each local model DBN_K and roughly level with the global model DBN, because its accuracy is already very high and hard to improve further.
Table 3: classification accuracy and running time on the spiral dataset
Table 4: classification accuracy and running time on the Gaussian dataset
S.2.2 UCI datasets
The UCI datasets chosen for this part of the experiment include both medium-scale and large-scale data, and both binary and multi-class problems; the experimental results are shown in the following tables:
Table 5: classification accuracy and running time on the Adult dataset
Table 6: classification accuracy and running time on the Magic_gamma_telescope dataset
Table 7: classification accuracy and running time on the pendigits dataset
Table 8: classification accuracy and running time on the Waveform3 dataset
Table 9: classification accuracy and running time on the shuttle dataset
The comparison results of the three algorithms on each UCI dataset are shown in Tables 3-9. From the experimental results above, the following can be concluded:
1) In test accuracy, compared with the global classification model DBN, FE-DBN gains considerably on the Adult, shuttle, and Magic_gamma_telescope datasets, and is roughly level on pendigits and waveform3. For a fixed subset partition, FE-DBN is higher than any single local classification model DBN_K. Overall, the classification performance of FE-DBN is the best of the three. The tables also show that, for a fixed number of subsets, the local classification models DBN_K with different hidden-node combinations show no significant difference in accuracy. As the number of subsets increases, the accuracy of FE-DBN tends to grow on the different datasets. The main reason is that, according to ensemble principles, increasing the diversity of the submodels improves the performance of the ensemble FE-DBN classifier.
2) Compared with the global model DBN, each local classification model in FE-DBN needs fewer hidden nodes to reach higher accuracy, mainly because the local classifiers composing FE-DBN are all weak classifiers.
3) For all datasets, in running time, as the number of subsets increases, the number of samples and the number of hidden nodes in each subset decrease, so the running time decreases accordingly. Because of the fuzzy partitioning and fuzzy integration, the running time of FE-DBN is greater than that of each local model DBN_K, but less than that of the global model DBN; the reason is that in FE-DBN the local classification models run in parallel and each submodel has fewer hidden nodes than the global model DBN.
On both simulated and UCI datasets, the DBN ensemble classifier based on fuzzy partitioning and fuzzy weighting (FE-DBN) performs better than the single classifier (DBN), and also better than the best local classification model DBN_K.
From the results in the tables, statistical analysis shows that the finer the partition granularity, the higher the classification accuracy, indicating that fine partitioning captures more sample feature information. However, more subsets do not always mean higher accuracy: the shuttle dataset reaches its maximum accuracy with 4 subsets.
The ensemble method solves the problem of high DBN training time complexity: the data are fuzzily grouped according to the affinity between samples to construct sample-space subsets, a DBN sub-classifier with a different structure is trained on each subset, and finally fuzzy weighting yields the ensemble classifier and the classification results. Experiments on artificial and UCI datasets show that the FE-DBN algorithm obtains better classification results than the other classification algorithms.
The above embodiments describe the invention but do not limit it; any scheme obtained by simple transformation of the invention falls within its scope of protection.

Claims (4)

1. An ensemble deep belief network based on fuzzy partitioning and fuzzy weighting, characterized by comprising the following steps in order:
a) Partition the training dataset into K subsets using the fuzzy clustering algorithm FCM;
b) Model each subset with a DBN of a different structure, each DBN submodel having different numbers of hidden nodes per layer, thereby forming K DBN models, each trained independently in parallel;
c) Combine the results of the models by fuzzy weighting to form the final output.
2. The ensemble deep belief network based on fuzzy partitioning and fuzzy weighting of claim 1, characterized in that: step a) performs fuzzy grouping of the training dataset using the fuzzy clustering algorithm FCM, whose objective function is:

J(U, V) = Σ_{i=1..K} Σ_{j=1..N} μ_ij^m ||x_j − υ_i||^2

where K is the number of partitions, N is the number of samples, υ_i = (υ_i1, ..., υ_id) is the center of the i-th class, μ_ij denotes the degree of membership of the j-th sample in the i-th class, m is the fuzzy exponent satisfying m ≥ 2, and x_j denotes the j-th sample point;

Introducing Lagrange multipliers to construct a new objective function, the iterative update formulas for the memberships and the cluster centers are derived as:

μ_ij = 1 / Σ_{k=1..K} ( ||x_j − υ_i|| / ||x_j − υ_k|| )^{2/(m−1)},  υ_i = Σ_{j=1..N} μ_ij^m x_j / Σ_{j=1..N} μ_ij^m

According to the above two formulas, after the iteration terminates, defuzzifying the obtained membership matrix U yields the space partition matrix;

Calculate the width of each cluster;

Fuzzy-partition the training dataset according to the cluster centers and widths, using the following formula:

θ_j = { x : |x_s − υ_js| ≤ ξ·σ_js },  s = 1, 2, ..., q,  j = 1, 2, ..., K,

where θ_j is the defined partition subset, σ_js is the width of the j-th cluster in dimension s, q is the dimensionality, and ξ is the overlap factor; the larger ξ is, the fuzzier the subset partition.
3. The ensemble deep belief network based on fuzzy partitioning and fuzzy weighting of claim 1, characterized in that: step b) calls the DBN algorithm and runs the submodels in parallel.
4. The ensemble deep belief network based on fuzzy partitioning and fuzzy weighting of claim 1, characterized in that: after each submodel of step c) is trained, test data x_i is given, its output on each model is computed, and the weights are calculated with a triangular membership function:

μ_k^s(x_i) = max( 0, 1 − |x_is − υ_ks| / (ξ·σ_ks) ),  w_k(x_i) = Π_{s=1..q} μ_k^s(x_i) / Σ_{l=1..K} Π_{s=1..q} μ_l^s(x_i)

for k = 1, 2, ..., K, i = 1, 2, ..., N, s = 1, 2, ..., q;

The sample space is partitioned and each classifier operates on its own sample subspace; the classifier in whose subspace the sample shows the best local classification performance receives the larger weight;

Finally, the results of the DBN classifiers are combined by fuzzy weighting:

ŷ_i = Σ_{k=1..K} w_k(x_i) · LCM_k(x_i)

where LCM_k(x_i) is the classification result of sample x_i on the k-th model (its local classification model), and ŷ_i is the final output obtained after fuzzy weighting of the K model results.
CN201910445367.7A 2019-05-27 2019-05-27 Integrated deep belief network based on fuzzy partition and fuzzy weighting Active CN110298434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910445367.7A CN110298434B (en) 2019-05-27 2019-05-27 Integrated deep belief network based on fuzzy partition and fuzzy weighting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910445367.7A CN110298434B (en) 2019-05-27 2019-05-27 Integrated deep belief network based on fuzzy partition and fuzzy weighting

Publications (2)

Publication Number Publication Date
CN110298434A true CN110298434A (en) 2019-10-01
CN110298434B CN110298434B (en) 2022-12-09

Family

ID=68027213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910445367.7A Active CN110298434B (en) 2019-05-27 2019-05-27 Integrated deep belief network based on fuzzy partition and fuzzy weighting

Country Status (1)

Country Link
CN (1) CN110298434B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002086814A1 (en) * 2001-04-23 2002-10-31 Hrl Laboratories, Llc An on-line method for self-organized learning and extraction of fuzzy rules
CN109034231A (en) * 2018-07-17 2018-12-18 辽宁大学 The deficiency of data fuzzy clustering method of information feedback RBF network valuation
CN109376803A (en) * 2018-12-19 2019-02-22 佛山科学技术学院 Multiple neural networks classifier fusion method and device based on fuzzy complex sets value integral

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949353A * 2019-12-10 2021-06-11 北京眼神智能科技有限公司 Iris silent liveness detection method and apparatus, readable storage medium, and device
CN111444937A (en) * 2020-01-15 2020-07-24 湖州师范学院 Crowdsourcing quality improvement method based on integrated TSK fuzzy classifier
CN111461286A (en) * 2020-01-15 2020-07-28 华中科技大学 Spark parameter automatic optimization system and method based on evolutionary neural network
CN111461286B (en) * 2020-01-15 2022-03-29 华中科技大学 Spark parameter automatic optimization system and method based on evolutionary neural network
CN111444937B (en) * 2020-01-15 2023-05-12 湖州师范学院 Crowd-sourced quality improvement method based on integrated TSK fuzzy classifier
CN113839916A (en) * 2020-06-23 2021-12-24 天津科技大学 Network intrusion detection classification method of information classification fuzzy model
CN113839916B (en) * 2020-06-23 2024-03-01 天津科技大学 Network intrusion detection classification method of information classification fuzzy model
CN111814917A * 2020-08-28 2020-10-23 成都千嘉科技有限公司 Digit recognition method for character-wheel images in a fuzzy state
CN111814917B * 2020-08-28 2020-11-24 成都千嘉科技有限公司 Digit recognition method for character-wheel images in a fuzzy state

Also Published As

Publication number Publication date
CN110298434B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN110298434A An integrated deep belief network based on fuzzy partition and fuzzy weighting
CN106599797B An infrared face recognition method based on a local parallel neural network
CN109829541A (en) Deep neural network incremental training method and system based on learning automaton
CN103955702A SAR image terrain classification method based on a deep RBF network
CN109063911A A load aggregator regrouping prediction method based on gated recurrent unit networks
CN106651915B A target tracking method with multi-scale representation based on convolutional neural networks
CN108805167A A Laplace-function-constraint-based sparse deep belief network image classification method
Zeng et al. CNN model design of gesture recognition based on tensorflow framework
CN102314614A (en) Image semantics classification method based on class-shared multiple kernel learning (MKL)
CN110009030A A sewage treatment fault diagnosis method based on a stacking meta-learning strategy
CN110009108A A novel quantum extreme learning machine
CN106980831A An autoencoder-based self-affiliation recognition method
Wang et al. SVM-based deep stacking networks
CN110533072A A SOAP service similarity calculation and clustering method based on Bigraph structure in the Web environment
Sokkhey et al. Development and optimization of deep belief networks applied for academic performance prediction with larger datasets
Yin et al. A rule-based deep fuzzy system with nonlinear fuzzy feature transform for data classification
CN111144500A (en) Differential privacy deep learning classification method based on analytic Gaussian mechanism
CN112200262B (en) Small sample classification training method and device supporting multitasking and cross-tasking
Li et al. Speech recognition based on k-means clustering and neural network ensembles
Qiao et al. SRS-DNN: a deep neural network with strengthening response sparsity
Ebrahimpour et al. Farsi handwritten digit recognition based on mixture of RBF experts
Ahmed et al. Branchconnect: Image categorization with learned branch connections
Zhu et al. Incremental classifier learning based on PEDCC-loss and cosine distance
CN109934281A An unsupervised training method for two classifier networks
Wang et al. Kernel-based deep learning for intelligent data analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant