CN107506590A - Cardiovascular disease prediction model based on an improved deep belief network - Google Patents
A cardiovascular disease prediction model based on an improved deep belief network
- Publication number: CN107506590A (application CN201710746036.8A)
- Authority: CN (China)
- Prior art keywords: network, layer, depth, dbn, forecast model
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The invention discloses a cardiovascular disease (CVD) prediction model based on an improved deep belief network (DBN). The model uses a multilayer network architecture to extract abstract feature representations layer by layer, then uses the optimal network parameters obtained during training to initialize the neural network. At the same time, the reconstruction error (Rerror) is used to determine the network depth autonomously. Combining unsupervised pre-training with supervised fine-tuning, an improved deep-belief-network prediction model is constructed that guarantees stability while improving prediction accuracy. The model addresses the drop in prediction accuracy that classical prediction models suffer under nonlinear, multi-class, complex factors, and also resolves the increased variance of prediction results caused by the random initialization of shallow neural networks.
Description
Technical field
The present invention relates to the field of prediction models, and in particular to a cardiovascular disease prediction model based on an improved deep belief network.
Background technology
At present, cardiovascular disease prediction models fall broadly into two categories. The first comprises classical probability-based prediction models: inference methods that reason about disease trends from epidemiological processes and features, relying on expert knowledge of cardiovascular disease (CVD) and subjective experience to make qualitative forecasts; and mathematical prediction methods built on large-scale follow-up cohort and cross-sectional data on risk factors and incidence, such as time-series prediction models and regression prediction models, which fit probabilistic models to regularly varying data to quantitatively mine the proportional relationships between pathogenic factors, and which place high demands on raw data quality. Examples include the SCORE, WHO and Framingham models. These models use fixed mathematical formulas and are stable, but they handle nonlinear, multi-class, complex factors poorly and their accuracy is relatively low.
The second category comprises cardiovascular disease prediction models based on shallow neural networks. These can effectively expand the set of predictive factors and process fuzzy and nonlinear data quickly, with higher accuracy. However, owing to the random initialization of shallow-network parameters, prediction results significantly below the average accuracy can occur, and the variance across repeated predictions is large.
To address these problems, this invention takes deep learning as its starting point: a multilayer network architecture extracts abstract feature representations layer by layer, and the optimal network parameters obtained during training are then used to initialize the neural network, solving the instability caused by random initialization and achieving a better prediction effect.
Summary of the invention
It is an object of the invention to overcome the deficiencies of the prior art and provide a cardiovascular disease prediction model based on an improved deep belief network, solving the instability caused by the random initialization of shallow-neural-network parameters.
The invention discloses a cardiovascular disease (CVD) prediction model based on an improved deep belief network (DBN). The model extracts abstract feature representations layer by layer with a multilayer network architecture, then uses the trained optimal network parameters to initialize the neural network. At the same time, the reconstruction error (Rerror) is used to determine the network depth autonomously. Combining unsupervised pre-training with supervised fine-tuning, an improved deep-belief-network prediction model is constructed that guarantees stability while improving prediction accuracy.
To achieve the above object, the prediction model based on the improved deep belief network comprises the following steps:
S1: Set the network initial values: the learning rate η is set to 1, the initial error er to 0, and the reconstruction-error difference threshold ε to 0.03; the maximum number of training epochs for each RBM is set to 10; the weights w, visible-layer biases a and hidden-layer biases b are randomly initialized to small values; the training batch size is set to 100;
S2: Take the training data {x} with label values removed as the first-layer network input and start the unsupervised pre-training stage. The number of input-layer neurons is set automatically to the sample feature dimension. Using Gibbs sampling and the contrastive divergence (CD) algorithm, perform the three steps h1 = sigm(v1ᵀw1 + b), v1' = sigm(h1ᵀw1ᵀ + a), h1' = sigm(v1'ᵀw1 + b), update each parameter according to these three formulas, and compute the error. Repeat the above steps until the termination condition is met, at which point the first-layer RBM is trained. Following the reconstruction-error criterion for determining network depth, check whether the stopping condition holds; if so, stop; otherwise train the next layer with h1 as its input;
S3: Determine the final network depth via S2 and record the optimal parameters of each layer; pass the trained DBN structure and parameters to a BP network, building a back-propagation network of the same depth;
S4: Take the output of the top-layer RBM as the input of the BP network, input the label values of the training data at the same time, and start the supervised fine-tuning stage, further adjusting the parameters of each DBN layer;
S5: Feed unlabeled test data into the constructed improved deep-belief-network prediction model, compare the label values computed by the network with the true label values, and compute the prediction accuracy.
Taking deep learning as its starting point, the present invention extracts abstract feature representations layer by layer with a multilayer network architecture, then uses the trained optimal network parameters to initialize the neural network, thereby solving the instability caused by random initialization and establishing a cardiovascular disease prediction model based on a deep belief network. At the same time, the reconstruction error (Rerror) is used to improve the DBN-based prediction model so that it can determine the network depth autonomously and achieve a better prediction effect.
Brief description of the drawings
Fig. 1 is the undirected graph structure of a restricted Boltzmann machine;
Fig. 2 is the structure of the deep belief network (DBN) model;
Fig. 3 is the flow chart for computing the network depth of the deep belief network (DBN).
Embodiment
Embodiments of the present invention are described below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art, and the scope of protection of the present invention can be more clearly defined. It should be particularly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the invention.
To better describe the technical scheme, the machine learning algorithm on which the invention is based, the deep belief network (DBN), is first briefly introduced.
A deep belief network (Deep belief network, DBN) is a network architecture built from restricted Boltzmann machines (Restricted Boltzmann machine, RBM). An RBM consists only of a visible layer and a single hidden layer; neurons in the two layers are fully connected between layers, with no connections within a layer. The structure is shown in Fig. 1.
In Fig. 1, v = (v1, v2, …, vn) denotes the visible layer, with vi a visible unit; h = (h1, h2, …, hm) denotes the hidden layer, with hj a hidden unit; w is the connection weight matrix between the two layers. The visible layer receives the input data (v1, v2, …, vn) representing the feature set; through the randomly initialized weights w and the states of the neurons, the hidden-layer states are learned and generated. Because neurons within a layer are not connected, the network has the following property when determining neuron states: given the states of the visible units, the activations of the hidden units are conditionally independent; conversely, given the states of the hidden units, the activations of the visible units are conditionally independent.
Given a configuration (v, h), the energy function of the RBM model is defined as:
E(v, h) = −Σi ai vi − Σj bj hj − Σi Σj vi wi,j hj
where a = (a1, a2, …, an) is the bias vector of the visible units; b = (b1, b2, …, bm) is the bias vector of the hidden units; v = (v1, v2, …, vn) is the state vector of the visible layer; h = (h1, h2, …, hm) is the state vector of the hidden layer; w = (wi,j) is the connection weight matrix, with wi,j the weight between the i-th visible unit and the j-th hidden unit.
For a configuration (v, h), its joint probability distribution is:
P(v, h) = (1/Z) exp(−E(v, h)),  Z = Σv,h exp(−E(v, h))
where θ = {a, b, w} are the RBM network parameters and Z is called the normalization factor or partition function.
In practical applications, the probability distribution P(v) of the training data v is typically used, i.e. the marginal distribution of P(v, h; θ):
P(v) = (1/Z) Σh exp(−E(v, h))
Similarly, the marginal distribution P(h) of the hidden-layer states is:
P(h) = (1/Z) Σv exp(−E(v, h))
The RBM learns from the training data by solving for the optimal model parameters θ, so that the model better fits the distribution of the training data, i.e. so that the samples attain maximal probability under the learned distribution. The log-likelihood function is constructed as:
L(θ) = Σv ln P(v; θ)
The gradient with respect to each model parameter is obtained by the maximum likelihood method:
∂ln P(v)/∂wi,j = ⟨vi hj⟩data − ⟨vi hj⟩model
where ⟨·⟩data is the expectation under the conditional distribution given the input training data, and ⟨·⟩model is the expectation under the joint distribution of the model. The expectations are computed by Gibbs sampling; because the computational cost of this sampling in every gradient iteration is prohibitive, Hinton proposed the contrastive divergence (Contrastive divergence, CD) algorithm to approximate the expectations after sampling.
According to the above, given the neuron states v of the visible layer, the activation probability of a hidden unit can be inferred as:
P(hj = 1 | v) = σ(bj + Σi vi wi,j)
After the hidden-unit state matrix is obtained, the probabilities of the reconstructed visible-unit states can be computed according to the CD algorithm:
P(vi = 1 | h) = σ(ai + Σj hj wi,j)
where σ is the sigmoid function, σ(x) = 1/(1 + exp(−x)).
The maximum of the likelihood function is approached iteratively by gradient ascent (Gradient ascent), and the RBM parameter update formulas are:
w(i+1) = w(i) + η(⟨v h⟩data − ⟨v' h'⟩recon)
a(i+1) = a(i) + η(⟨v⟩data − ⟨v'⟩recon)
b(i+1) = b(i) + η(⟨h⟩data − ⟨h'⟩recon)
where η is the learning rate of the model and i is the current iteration. The parameters θ are updated iteratively according to these rules, quickly reaching the maximum along the likelihood gradient, which yields the optimal parameters.
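The sampling and update rules above can be sketched as a single CD-1 step in NumPy. This is a minimal illustration, not the patent's implementation; the function name `cd1_update` and the hyperparameter values are our own, and the positive phase uses hidden probabilities while the negative phase uses one reconstruction, as is conventional.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v1, w, a, b, eta=0.1, rng=None):
    """One CD-1 parameter update on a batch v1 of shape [batch, n_visible].
    Returns the updated (w, a, b) and the reconstruction v1'."""
    rng = rng or np.random.default_rng(0)
    h1_prob = sigm(v1 @ w + b)                        # upward pass: P(h=1|v)
    h1 = (rng.random(h1_prob.shape) < h1_prob) * 1.0  # sample hidden states
    v1_rec = sigm(h1 @ w.T + a)                       # reconstruct visible layer
    h1_rec = sigm(v1_rec @ w + b)                     # h1' = sigm(v1'^T w + b)
    # gradient-ascent updates on the likelihood (positive minus negative phase)
    w = w + eta * (v1.T @ h1_prob - v1_rec.T @ h1_rec) / len(v1)
    a = a + eta * (v1 - v1_rec).mean(axis=0)
    b = b + eta * (h1_prob - h1_rec).mean(axis=0)
    return w, a, b, v1_rec
```

Repeating this update drives the reconstruction toward the data distribution, which is the behavior the pre-training stage relies on.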
A DBN is formed by stacking multiple RBM units: the visible layer of the bottom RBM serves as the input layer, the hidden layer of each RBM serves as the visible layer of the next RBM, and finally a BP network fine-tunes the parameters globally and produces the training output. The DBN structure is shown in Fig. 2.
The RBM is a probability-based neural network, and the generation process of the DBN is therefore determined probabilistically: it establishes a joint probability distribution between features and labels:
P(v, h1, h2, …, hl) = P(v | h1) P(h1 | h2) … P(hl−2 | hl−1) P(hl−1, hl)
where P(hk | hk+1) is the conditional distribution of hk given the state hk+1, and P(hl−1, hl) is the joint distribution of hl−1 and hl. P(v, h) is the joint distribution of a single RBM; since in a DBN the hidden layer of a lower RBM serves as the visible layer of the next higher RBM, the formula above is the probability distribution of the whole model.
The DBN training process comprises two stages, as shown in Fig. 2: upward pre-training and downward fine-tuning.
(1) Pre-training stage: using a greedy layer-wise training algorithm, the parameters θ = {a, b, w} of each RBM layer are learned successively in an unsupervised manner. The training data are first received by the visible layer of the first RBM, producing the state v1; the initialized weight matrix w1 generates the hidden states h1 upward; h1 is used to reconstruct the visible state v'1, which is mapped through w1 to the hidden layer again to generate new hidden states h'1. The parameters are updated with the CD algorithm until the reconstruction error is minimal, completing the training of the first RBM layer. The stacked RBMs are trained layer by layer according to the greedy learning rule, each layer mapping to a different feature space. The top two RBM layers are connected bidirectionally, forming an associative memory layer that can associate the optimal parameters of each layer. Through unsupervised learning, the DBN acquires prior knowledge; the more abstract features obtained at the top layer better reflect the true structural information of the training data.
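The greedy layer-wise procedure described above can be sketched as follows. This is a simplified sketch with our own names (`pretrain_dbn`, `hidden_sizes`); a real implementation would use mini-batches and a tuned learning schedule.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_dbn(x, hidden_sizes, epochs=10, eta=0.1, seed=0):
    """Greedy layer-wise pre-training: train one RBM per hidden size,
    feeding each trained layer's hidden probabilities to the next RBM."""
    rng = np.random.default_rng(seed)
    params, v = [], x
    for n_hid in hidden_sizes:
        n_vis = v.shape[1]
        w = rng.normal(0.0, 0.01, (n_vis, n_hid))
        a, b = np.zeros(n_vis), np.zeros(n_hid)
        for _ in range(epochs):                 # CD-1 epochs on this layer
            h = sigm(v @ w + b)
            h_s = (rng.random(h.shape) < h) * 1.0
            v_rec = sigm(h_s @ w.T + a)
            h_rec = sigm(v_rec @ w + b)
            w += eta * (v.T @ h - v_rec.T @ h_rec) / len(v)
            a += eta * (v - v_rec).mean(axis=0)
            b += eta * (h - h_rec).mean(axis=0)
        params.append((w, a, b))
        v = sigm(v @ w + b)  # this hidden layer is the next RBM's visible layer
    return params, v
```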
(2) Fine-tuning stage: with the parameters of each pre-trained layer as initial values, the DBN model is trained with labeled samples in a supervised manner, fine-tuned by back-propagating the error from the top of the network downward, further optimizing the parameters of each RBM layer. Classical fine-tuning algorithms for DBNs include the wake-sleep algorithm and the back-propagation algorithm; here a BP network with the same depth as the DBN serves as the fine-tuning network and as the regression prediction output layer. The initial values of the BP network are the highly abstract features obtained by DBN pre-training, which alleviates the local optima and overfitting caused by the random initialization of traditional neural networks.
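Passing the pre-trained parameters to a BP network of the same depth amounts to copying each layer's weights and hidden biases and adding only a small, randomly initialized output layer on top. A sketch under our own conventions (`init_bp_from_dbn`, and `params` as a list of per-layer `(w, a, b)` tuples are assumptions, not from the patent):

```python
import numpy as np

def init_bp_from_dbn(params, n_out=1, seed=0):
    """Build the layer list of a BP network from pre-trained DBN parameters.
    Each (w, a, b) tuple contributes its weights w and hidden biases b;
    only the output layer on top is randomly initialized."""
    rng = np.random.default_rng(seed)
    layers = [(w.copy(), b.copy()) for (w, a, b) in params]
    n_top = params[-1][0].shape[1]
    layers.append((rng.normal(0.0, 0.01, (n_top, n_out)), np.zeros(n_out)))
    return layers
```

Supervised fine-tuning then runs ordinary back-propagation over these layers, starting from the DBN's learned features rather than from random weights.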
Through the above steps, a globally optimal DBN model can be fully built and trained. The learning stages above constitute the workflow of a complete DBN model.
After the DBN structure is established by the above procedure, the CVD training samples without label values are fed into the visible layer of the bottom RBM, which learns the features of the data unsupervised and performs dimensionality reduction. The optimal feature parameters learned by the top RBM are used as the initial values of the neural network, overcoming the defects brought by random initialization and improving the stability of model prediction.
The more complex the DBN structure, the stronger its ability to solve difficult problems. At the same time, the deeper the network, the harder the training, and the more serious the accumulation of training error, so that model accuracy may instead decrease. In applications, building a suitable DBN structure for a particular task lacks corresponding theoretical support and an effective training method; the network depth and number of hidden units must be set by experience, which easily introduces bias into the modeling process at excessive cost.
To address the problem of determining the number of DBN layers, the deep-belief-network prediction model is improved based on the reconstruction error of each RBM layer's training, establishing a DBN that can select its network depth automatically, in order to improve the automatic analysis capability of the cardiovascular disease prediction model. The specific method is as follows.
In each RBM layer, Gibbs transitions are computed by the CD algorithm to reconstruct the visible-layer input data, which is then mapped to the hidden layer again. The reconstruction error is computed from the difference between the reconstructed output data and the original training data:
Rerror = (1/px) Σi Σj (pij − xij)²
where n is the number of training samples; m is the number of features per sample; pij is the reconstructed value of the training sample at each RBM layer; xij is the actual value of the training sample; and px = n·m is the number of values entering the computation.
To prevent overfitting to the training data or large deviations in the reconstructed data, while balancing the training cost of the network model, the criterion for stopping depth growth is that the difference between two successive reconstruction errors falls below a set value, i.e.:
|Rerror(k) − Rerror(k−1)| < ε,  1 < k ≤ L
where L is the number of DBN hidden layers; k indexes the current layer's Rerror; and ε is the preset value. In the unsupervised pre-training stage, once the number of layers reaches the target, the output of the top layer is used as the input of the BP algorithm and reverse fine-tuning of the parameters begins. The flow for constructing the network according to Rerror is shown in Fig. 3.
Rerror is positively correlated with the network energy E(v, h); this coupling also demonstrates the feasibility of selecting the DBN network depth by the criterion of reconstruction error. The argument is as follows.
Let P be the computed value and X the actual label value, so that P = P(v) and X = P(v1). By the conditional probability formula:
P = P(v) = P(v1) P(h | v1) P(v | h)
Expanding by the total probability formula, rewriting accordingly, and substituting into the Rerror reconstruction error, and noting that the energy and the probability distribution of the neural network are proportional, i.e. P(v, h) ∝ E(v, h), we obtain:
Rerror ∝ P(v, h) ∝ E(v, h)
This shows that Rerror and E(v, h) are coupled, arguing from the standpoint of the network mechanism that relying on the reconstruction error to determine the DBN depth autonomously is reasonable. The number of neurons in each layer also influences the network, but there is still no theoretically grounded method for setting a suitable number of units; the improved deep-belief-network prediction model constructed here therefore focuses on determining the network depth, with the number of neurons per layer fixed.
Embodiment:
To illustrate the technical effect of the present invention, the invention is verified with a specific application example.
The experiments select the Statlog (Heart) data set and the Heart Disease Database from the UCI machine learning repository to validate the model. The Statlog (Heart) data set contains 270 instances, and the Heart Disease Database contains 820 instances. The attributes of the two data sets include continuous, binary, ordered multi-class and unordered multi-class variables. As shown in Table 1, 13 identical attributes and 1 classification label value are selected from the two data sets for the experiments.
Table 1
The physical meaning, data units and orders of magnitude of the attributes in the selected data sets differ, so normalization is required before the experiments. The raw data are mapped into the range [0, 1]; after normalization, all indicators are of the same order of magnitude, which is convenient for comprehensive comparative evaluation. Here the Z-score standardization method is chosen to normalize the data.
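Z-score standardization, as selected above, can be sketched column-wise as follows (the guard for constant features is our own addition, not from the patent):

```python
import numpy as np

def zscore(x):
    """Column-wise Z-score standardization: each feature is shifted to
    zero mean and scaled to unit standard deviation."""
    mu = x.mean(axis=0)
    sd = x.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)  # guard against constant columns
    return (x - mu) / sd
```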
70% of the instances in each data set are chosen as training samples, and the remaining instances are used as test samples.
Experiment flow:
Experiment 1: build the improved deep-belief-network prediction model (Re-DBN model for short) with the training data.
S1: Set the network initial values: the learning rate η is set to 1, the initial error er to 0, and the reconstruction-error difference threshold ε to 0.03; the maximum number of training epochs per RBM is set to 10; the weights w, visible-layer biases a and hidden-layer biases b are randomly initialized to small values; the training batch size is set to 100;
S2: Take the training data {x} with label values removed as the first-layer network input and start the unsupervised pre-training stage. The number of input-layer neurons is set automatically to the sample feature dimension, i.e. the 13 risk factors in the data set. Using Gibbs sampling and the CD algorithm, perform the three steps h1 = sigm(v1ᵀw1 + b), v1' = sigm(h1ᵀw1ᵀ + a), h1' = sigm(v1'ᵀw1 + b), update each parameter according to these three formulas, and compute the error. Repeat the above steps until the termination condition is met, at which point the first-layer RBM is trained. Following the reconstruction-error criterion for network depth, check whether the stopping condition holds; if so, stop; otherwise train the next layer with h1 as input;
S3: Determine the final network depth via S2 and record the optimal parameters of each layer; pass the trained DBN structure and parameters to a BP network, building a back-propagation network of the same depth;
S4: Take the output of the top-layer RBM as the input of the BP network, input the label values of the training data at the same time, and start the supervised fine-tuning stage, further adjusting the parameters of each DBN layer;
S5: Feed unlabeled test data into the constructed improved deep-belief-network prediction model, compare the label values computed by the network with the true label values, and compute the prediction accuracy;
S6: The algorithm terminates.
Experiment 2: establish a standard DBN model.
To check the correctness of the network depth determined by the improved deep-belief-network prediction model, a standard DBN is established. Its optimal number of layers is determined experimentally, and the optimal number of units per layer is selected experimentally according to the empirical formula
N = ⌈√(m + n)⌉ + k
where m is the dimension of the input data, i.e. the number of CVD risk factors; n is the number of output-layer units (CVD prediction outputs a prediction probability, so n = 1); N is the number of hidden units; ⌈·⌉ is the ceiling (round-up) operator; and k is an integer in [1, 5], which bounds the interval of candidate unit counts and avoids blind selection.
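The candidate hidden-unit counts can be enumerated as follows. Note that the exact formula is elided in the original; the form N = ⌈√(m + n)⌉ + k used here is an assumption reconstructed from the definitions above, and it does reproduce the [5, 9] unit range reported in the results for m = 13, n = 1.

```python
import math

def candidate_unit_counts(m, n, k_min=1, k_max=5):
    """Candidate hidden-unit counts N = ceil(sqrt(m + n)) + k for
    k in [k_min, k_max], bounding the search interval (assumed form
    of the elided empirical formula)."""
    base = math.ceil(math.sqrt(m + n))
    return [base + k for k in range(k_min, k_max + 1)]
```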
Experiment 3: predict on the experimental data with a Logistic regression prediction model with dummy variables (Dv-Logistic model for short).
Experiment 4: predict on the experimental data with a standard DBN whose unit counts are chosen experimentally to be optimal (N-DBN model for short).
Experimental results
The improved DBN prediction model was tested on both data sets: on Statlog (Heart), growth stops when the third hidden layer is added, giving a model depth of 4; on the Heart Disease Database, growth stops at the fourth hidden layer, giving a model depth of 5.
The test results show that in every RBM layer, the network parameters θ approach the optimum gradually as the number of iterations increases. The final Rerror of each layer is smaller than that of the layer below, showing that every layer effectively improves the fitting of the training samples. The initial Rerror of a newly added layer exceeds the minimum Rerror attained by the layer below because the network parameters of the newly established hidden layer are again randomly initialized; as the iterative computation proceeds, its Rerror falls below the previous layer's minimum.
To compare performance against the improved DBN, standard DBN models with the same structure are established: a 4-layer neural network for Statlog (Heart) and a 5-layer neural network for the Heart Disease Database, with the number of units per layer chosen as the best according to the formula N = ⌈√(m + n)⌉ + k together with experiment. The number of input-layer units equals the 13 feature dimensions of the data sets, i.e. m = 13; the network output is the label probability computed by the regression, i.e. n = 1, so the range of second-layer unit counts is [5, 9]; experiments are carried out for each candidate, and the count with minimal reconstruction error is chosen as the optimal unit number.
The results show that for the Statlog (Heart) data set, the Rerror of RBM1 is minimal with 7 hidden units, so the unit count is set to 7; for the Heart Disease Database, the Rerror is minimal with 9 hidden units, so the unit count is set to 9. Proceeding in the same way, the finally determined DBN structures are:
Statlog (Heart): 4-layer network with unit counts 13-7-6-4;
Heart Disease Database: 5-layer network with unit counts 13-9-8-5-4.
To further check the correctness of the network depth determined by the improved DBN, the number of hidden layers of the standard DBN model is increased successively and the accuracy on the test data is judged. To ensure that the number of layers is the only independent variable, the unit counts of each layer are kept identical to those of the improved DBN model. The results are shown in Table 2.
Table 2
Analysis of Table 2 shows that as the network hierarchy grows, Rerror decreases and the training time increases. The accuracy on the test data reaches its maximum at depth 4 for Statlog (Heart) and at depth 5 for the Heart Disease Database, consistent with the network depth determined automatically by the improved DBN, which further demonstrates the better performance of the cardiovascular disease prediction model built on the improved DBN.
The established improved deep-belief-network prediction model (Re-DBN model for short) is tested on both data sets and compared experimentally with a shallow-neural-network prediction model, the Logistic regression prediction model with dummy variables (Dv-Logistic model for short) and the N-DBN model; the AUC results are shown in Tables 3 and 4.
Table 3: Statlog (Heart)
Table 4: Heart Disease Database
Comparative analysis of the test results of the above models leads to the following conclusions:
As shown in Tables 3 and 4, compared with the shallow-neural-network model, the variance of the prediction results of the deep-learning-based CVD prediction models is reduced from 12.665 and 9.051 to 5.72 and 4.64 respectively, showing that the proposed model largely solves the problem of prediction stability; the accuracy of the prediction results and the AUC of the model are also improved, showing that the deep-learning model further improves the accuracy of CVD prediction.
As shown in Table 2, the prediction accuracy of the standard DBN model with fixed hidden-unit counts reaches its maximum at 4 and 5 layers for the two data sets respectively, identical to the network depths determined by the Re-DBN model in Experiment 1; this confirms that the model determines the network depth correctly and illustrates the high practical value of the improved deep-learning-based CVD prediction model.
To reduce computational complexity, the Re-DBN model adopts the strategy of fixing the number of hidden units, which makes its prediction accuracy slightly lower than that of the N-DBN model, but better balances the accuracy, stability and autonomy of CVD prediction. As a next step, an adaptive search mechanism for the optimal unit count can be introduced by studying the relationships between the layers of the Re-DBN model, to further improve the performance of the model.
In summary, the cardiovascular disease (CVD) prediction model based on the improved deep belief network (DBN) guarantees stability while improving prediction accuracy. It solves the drop in prediction accuracy that classical prediction models suffer under nonlinear, multi-class, complex factors, and also resolves the increased variance of prediction results caused by the randomness of shallow-neural-network initial parameters.
Although illustrative embodiments of the present invention are described above to help those skilled in the art understand the invention, it should be clear that the invention is not restricted to the scope of these embodiments. To those of ordinary skill in the art, all changes within the spirit and scope of the invention as defined and determined by the appended claims are apparent, and all innovations making use of the inventive concept are within the scope of protection.
Claims (4)
1. A cardiovascular disease (CVD) prediction model based on an improved deep belief network (DBN), wherein the model is based on a deep belief network (DBN), extracts abstract feature representations layer by layer with a multilayer network architecture, and then uses the trained optimal network parameters to initialize the neural network; at the same time, the reconstruction error (Rerror) is used to determine the network depth autonomously; combining unsupervised pre-training with supervised fine-tuning, a cardiovascular disease prediction model based on the improved deep belief network is constructed that guarantees stability while improving prediction accuracy.
2. The cardiovascular disease prediction model based on the improved deep belief network according to claim 1, characterized in that the algorithm comprises the following steps:
S1: Set the network initial values: the learning rate η is set to 1, the initial error er to 0, and the reconstruction-error difference threshold ε to 0.03; the maximum number of training epochs per RBM is set to 10; the weights w, visible-layer biases a and hidden-layer biases b are randomly initialized to small values; the training batch size is set to 100;
S2: Take the training data {x} with label values removed as the first-layer network input and start the unsupervised pre-training stage; the number of input-layer neurons is set automatically to the sample feature dimension; using Gibbs sampling and the CD algorithm, perform the three steps h1 = sigm(v1ᵀw1 + b), v1' = sigm(h1ᵀw1ᵀ + a), h1' = sigm(v1'ᵀw1 + b), update each parameter according to these three formulas, and compute the error; repeat until the termination condition is met, at which point the first-layer RBM is trained; following the reconstruction-error criterion for network depth, check whether the stopping condition holds; if so, stop; otherwise train the next layer with h1 as input;
S3: Determine the final network depth via S2 and record the optimal parameters of each layer; pass the trained DBN structure and parameters to a BP network, building a back-propagation network of the same depth;
S4: Take the output of the top-layer RBM as the input of the BP network, input the label values of the training data at the same time, and start the supervised fine-tuning stage, further adjusting the parameters of each DBN layer;
S5: Feed unlabeled test data into the constructed improved deep-belief-network prediction model, compare the label values computed by the network with the true label values, and compute the prediction accuracy.
3. The cardiovascular disease prediction model based on the improved deep belief network according to claim 1, characterized in that the prediction model is established on the basis of a deep belief network (DBN), extracts abstract feature representations layer by layer with a multilayer network architecture, and then uses the trained optimal network parameters to initialize the neural network.
4. The cardiovascular disease prediction model based on an improved deep belief network according to claim 1, characterized in that: the network depth is determined autonomously using the reconstruction error (Rerror), and the prediction model is built by combining unsupervised training with supervised fine-tuning.
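One plausible reading of claim 4's depth rule is: add RBM layers one at a time and stop growing when a layer's reconstruction error falls below a threshold (or a layer cap is hit). The sketch below implements that reading with scikit-learn; the threshold rule, layer-width halving, and function name are assumptions, not the patent's exact criterion.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def grow_dbn(X, max_layers=5, threshold=0.05, seed=0):
    """Grow a DBN layer by layer, stopping on reconstruction error.

    Each new RBM is trained on the hidden activations of the previous
    layer; growth stops when the mean squared reconstruction error of
    the newest layer drops below `threshold`.
    """
    layers, h = [], X
    for _ in range(max_layers):
        rbm = BernoulliRBM(n_components=max(2, h.shape[1] // 2),
                           n_iter=10, random_state=seed).fit(h)
        hid = rbm.transform(h)  # hidden-unit probabilities
        # Reconstruct the visible layer from the hidden probabilities.
        recon = 1.0 / (1.0 + np.exp(-(hid @ rbm.components_
                                      + rbm.intercept_visible_)))
        err = np.mean((h - recon) ** 2)
        layers.append(rbm)
        if err < threshold:
            break
        h = hid
    return layers
```

The returned list of RBMs fixes the network depth; its per-layer parameters would then initialize the BP network for supervised fine-tuning, as in steps S3–S4.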
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710746036.8A CN107506590A (en) | 2017-08-26 | 2017-08-26 | A kind of angiocardiopathy forecast model based on improvement depth belief network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107506590A true CN107506590A (en) | 2017-12-22 |
Family
ID=60692836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710746036.8A Pending CN107506590A (en) | 2017-08-26 | 2017-08-26 | A kind of angiocardiopathy forecast model based on improvement depth belief network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107506590A (en) |
2017-08-26: CN application CN201710746036.8A filed; published as CN107506590A (en), status: active, Pending
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537337A (en) * | 2018-04-04 | 2018-09-14 | 中航锂电技术研究院有限公司 | Lithium ion battery SOC prediction techniques based on optimization depth belief network |
CN108665001A (en) * | 2018-05-10 | 2018-10-16 | 河南工程学院 | It is a kind of based on depth confidence network across subject Idle state detection method |
CN108665001B (en) * | 2018-05-10 | 2020-10-27 | 河南工程学院 | Cross-tested idle state detection method based on deep belief network |
CN108763418A (en) * | 2018-05-24 | 2018-11-06 | 辽宁石油化工大学 | A kind of sorting technique and device of text |
CN109063247A (en) * | 2018-06-26 | 2018-12-21 | 西安工程大学 | Landslide disaster forecasting procedure based on deepness belief network |
CN109063247B (en) * | 2018-06-26 | 2023-04-18 | 西安工程大学 | Landslide disaster forecasting method based on deep belief network |
CN108960496A (en) * | 2018-06-26 | 2018-12-07 | 浙江工业大学 | A kind of deep learning traffic flow forecasting method based on improvement learning rate |
CN109106384A (en) * | 2018-07-24 | 2019-01-01 | 安庆师范大学 | A kind of psychological pressure condition predicting method and system |
CN109034391A (en) * | 2018-08-17 | 2018-12-18 | 王玲 | The multi-source heterogeneous information RBM network integration framework and fusion method of automatic Pilot |
CN109102126A (en) * | 2018-08-30 | 2018-12-28 | 燕山大学 | One kind being based on depth migration learning theory line loss per unit prediction model |
CN109388850A (en) * | 2018-09-04 | 2019-02-26 | 重庆科技学院 | A kind of flexible measurement method of vertical farm nutrient solution availability |
CN109388850B (en) * | 2018-09-04 | 2023-04-07 | 重庆科技学院 | Soft measurement method for validity of nutrient solution in vertical farm |
CN111261289A (en) * | 2018-11-30 | 2020-06-09 | 上海图灵医疗科技有限公司 | Heart disease detection method based on artificial intelligence model |
US11514368B2 (en) | 2019-03-29 | 2022-11-29 | Advanced New Technologies Co., Ltd. | Methods, apparatuses, and computing devices for trainings of learning models |
CN110059802A (en) * | 2019-03-29 | 2019-07-26 | 阿里巴巴集团控股有限公司 | For training the method, apparatus of learning model and calculating equipment |
CN110414718A (en) * | 2019-07-04 | 2019-11-05 | 上海工程技术大学 | A kind of distribution network reliability index optimization method under deep learning |
CN110543656A (en) * | 2019-07-12 | 2019-12-06 | 华南理工大学 | LED fluorescent powder glue coating thickness prediction method based on deep learning |
CN110991605A (en) * | 2019-10-25 | 2020-04-10 | 燕山大学 | Low-pressure casting mold temperature prediction method of multivariable time series deep belief network |
CN111105877A (en) * | 2019-12-24 | 2020-05-05 | 郑州科技学院 | Chronic disease accurate intervention method and system based on deep belief network |
CN112216399A (en) * | 2020-10-10 | 2021-01-12 | 黑龙江省疾病预防控制中心 | Food-borne disease pathogenic factor prediction method and system based on BP neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107506590A (en) | A kind of angiocardiopathy forecast model based on improvement depth belief network | |
CN108829763B (en) | Deep neural network-based attribute prediction method for film evaluation website users | |
Wu et al. | Evolving RBF neural networks for rainfall prediction using hybrid particle swarm optimization and genetic algorithm | |
CN107463966B (en) | Radar range profile's target identification method based on dual-depth neural network | |
CN110175386B (en) | Method for predicting temperature of electrical equipment of transformer substation | |
CN109242223B (en) | Quantum support vector machine evaluation and prediction method for urban public building fire risk | |
CN109102126A (en) | One kind being based on depth migration learning theory line loss per unit prediction model | |
CN111860982A (en) | Wind power plant short-term wind power prediction method based on VMD-FCM-GRU | |
CN102622515B (en) | A kind of weather prediction method | |
CN106650920A (en) | Prediction model based on optimized extreme learning machine (ELM) | |
CN104636801A (en) | Transmission line audible noise prediction method based on BP neural network optimization | |
CN109034054B (en) | Harmonic multi-label classification method based on LSTM | |
CN106529820A (en) | Operation index prediction method and system | |
CN106295874A (en) | Traffic flow parameter Forecasting Methodology based on deep belief network | |
CN105975573A (en) | KNN-based text classification method | |
CN110321361A (en) | Examination question based on improved LSTM neural network model recommends determination method | |
CN106022954A (en) | Multiple BP neural network load prediction method based on grey correlation degree | |
CN103324954A (en) | Image classification method based on tree structure and system using same | |
CN115186097A (en) | Knowledge graph and reinforcement learning based interactive recommendation method | |
CN110298434A (en) | A kind of integrated deepness belief network based on fuzzy division and FUZZY WEIGHTED | |
CN109978612A (en) | A kind of convenience store's Method for Sales Forecast method based on deep learning | |
CN110414718A (en) | A kind of distribution network reliability index optimization method under deep learning | |
CN108364073A (en) | A kind of Multi-label learning method | |
CN108647772A (en) | A method of it is rejected for slope monitoring data error | |
CN111144500A (en) | Differential privacy deep learning classification method based on analytic Gaussian mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20171222 |