CN103258243A - Tube explosion predicting method based on grey neural network - Google Patents

Tube explosion predicting method based on grey neural network

Info

Publication number
CN103258243A
CN103258243A · CN2013101518441A · CN201310151844A
Authority
CN
China
Prior art keywords
neural network
sequence
pipe burst
data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101518441A
Other languages
Chinese (zh)
Other versions
CN103258243B (en)
Inventor
徐哲
杨洁
车栩龙
孔亚广
薛安克
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201310151844.1A priority Critical patent/CN103258243B/en
Priority claimed from CN201310151844.1A external-priority patent/CN103258243B/en
Publication of CN103258243A publication Critical patent/CN103258243A/en
Application granted granted Critical
Publication of CN103258243B publication Critical patent/CN103258243B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a pipe burst prediction method based on a grey neural network. First, for given pipe burst factors and burst rate data sequences, the burst rate sequence is predicted by static grey modeling, and the prediction is compared with the original burst rate sequence to obtain the residuals. A neural network is then used to build an approximation model between the residuals and the burst factors; the repeatedly trained network is the mapping between the residuals and the grey-model data. In the final prediction, the grey-model predictions are compensated with the compensation values of the neural network. By combining the grey modeling method with a neural network model, the method overcomes the traditional burst models' need for large amounts of data, solves the small-sample prediction problem well, and improves prediction accuracy.

Description

Pipe burst prediction method based on grey neural network
Technical field
The invention belongs to the field of urban water supply; specifically, it is a water supply network pipe burst prediction method based on a grey neural network.
Background technology
The water supply network is one of a city's important infrastructure facilities and an important component of the urban lifeline system. Pipe bursts waste large amounts of water, threaten the security of the water supply, and disturb normal production and daily life. Analyzing historical leakage data and establishing an effective burst prediction model makes it possible to control network leakage at its source, to prevent and detect it early, to maintain the network scientifically and rationally, and to achieve active leakage control.
At present, pipe burst prediction models mainly comprise physical models and statistical models. Physical models generally predict pipeline failure from the loads acting on the pipe, the pipe's resistance to those loads, and the degree and extent of internal and external corrosion. Statistical models take the historical burst records of the network as their basis and use statistical methods to derive quantitative rules for burst accidents. In recent years, data-driven modeling techniques based on artificial intelligence have attracted attention and have been applied to burst prediction; for example, Tabesh M. et al., in the article "Assessing pipe failure rate and mechanical reliability of water distribution networks using data driven modeling", propose burst models based on artificial neural networks and an adaptive neuro-fuzzy inference system.
However, traditional burst models need large amounts of data, such as pipe characteristics and accurate, sufficiently long records of pipeline operation and maintenance. Water supply networks are large and complex, and detailed records of leak occurrences are difficult to collect accurately and completely, so methods that analyze and predict bursts from the limited data actually available are urgently needed. At the same time, network leakage is influenced by many factors, such as pipe age, pipe material, temperature, and construction work, and involves considerable uncertainty. If the various complicated factors affecting pipe bursts are regarded together as one large system, that system combines determinacy with uncertainty and can be treated as a typical grey system.
Grey system modeling does not require knowledge of the data's distribution or trend; it can extract the system's behavior from a small number of samples, and the modeling procedure is simple. However, a grey model has no parallel computing capability and its accuracy is limited. A neural network, in contrast, can realize nonlinear mappings and offers parallel computation, distributed information storage, strong fault tolerance, and adaptive learning. Combining the two into a grey neural network model retains the advantages of both, handles the small-sample prediction problem well, and improves prediction accuracy.
Summary of the invention
The object of the invention is to overcome the deficiencies of existing methods by proposing a water supply network pipe burst prediction method based on a grey neural network. The method effectively improves prediction accuracy, places low demands on the network's historical burst records, and is applicable to both small-sample and large-sample prediction.
The method is realized through the following technical solution. First, for the given burst factors and burst rate data sequences, the burst rate sequence is predicted by static grey modeling. The prediction is compared with the original burst rate sequence to obtain the residuals. Then a neural network is used to build an approximation model between these residuals and the burst factors; the repeatedly trained network is the mapping between the residuals and the selected grey-model data. In the final prediction, the grey-model prediction is compensated with the compensation value of the neural network.
The concrete modeling process is as follows:
(1) Collect and organize the burst statistics
The factors that commonly influence pipe bursts include pipe quality, joint type, diameter, burial depth, temperature variation, ground settlement and loading, network operating pressure, and pipe corrosion. Some of these factors can be quantified and some cannot. From the burst database, the quantifiable factors such as diameter, burial depth, network operating pressure, and pipe length are compiled and analyzed, and the burst rate is calculated (usually as an annual burst rate).
(2) Establish the grey model
Based on the N burst factors and the burst rate collected in step (1), a grey model is established with the N burst factors as factor variables and the burst rate as the behavior variable. The concrete steps are as follows:
1) Let the system characteristic data sequence (the m observations of the behavior variable) be
X_1^{(0)} = ( x_1^{(0)}(1), x_1^{(0)}(2), ..., x_1^{(0)}(m) )
and let the N sequences
X_2^{(0)} = ( x_2^{(0)}(1), x_2^{(0)}(2), ..., x_2^{(0)}(m) ),
...,
X_{N+1}^{(0)} = ( x_{N+1}^{(0)}(1), x_{N+1}^{(0)}(2), ..., x_{N+1}^{(0)}(m) )
be the correlative factor sequences, each representing the m observations of one factor variable.
The one-accumulate generation (1-AGO) sequence of each of the above data sequences is denoted X_i^{(1)} (i = 1, 2, ..., N+1). One-accumulate generation is defined as follows: if X_i^{(0)} = ( x_i^{(0)}(1), x_i^{(0)}(2), ..., x_i^{(0)}(m) ) is an original sequence and D is the sequence operator with
X_i^{(0)}D = ( x_i^{(0)}(1)d, x_i^{(0)}(2)d, ..., x_i^{(0)}(m)d ),  where x_i^{(0)}(k)d = Σ_{j=1}^{k} x_i^{(0)}(j), k = 1, 2, ..., m,
then D is called the one-accumulate generating operator of X_i^{(0)}, and the newly generated sequence X_i^{(1)} = X_i^{(0)}D is the one-accumulate generated sequence.
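A minimal Matlab sketch of this 1-AGO step, assuming the data are arranged as an (N+1)-by-m matrix; the variable names X0 and X1 and the example numbers are illustrative and not taken from the patent:
% X0 is an (N+1)-by-m data matrix: row 1 holds the burst rate sequence,
% rows 2..N+1 hold the factor sequences (illustrative numbers)
X0 = [0.12 0.15 0.11 0.14;
      300  300  400  400;
      12   15   18   21];
X1 = cumsum(X0, 2);   % 1-AGO along each row: x1(k) = x0(1) + ... + x0(k)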
2) Based on X_i^{(1)} (i = 1, 2, ..., N+1), the static grey model
x_1^{(1)}(k) = b_2 x_2^{(1)}(k) + b_3 x_3^{(1)}(k) + ... + b_{N+1} x_{N+1}^{(1)}(k) + a
is established. Its parameter vector â = [ b_2, b_3, ..., b_{N+1}, a ]^T can be obtained by least-squares estimation as
â = (B^T B)^{-1} B^T Y
where
B = [ x_2^{(1)}(1)  x_3^{(1)}(1)  ...  x_{N+1}^{(1)}(1)  1 ; ... ; x_2^{(1)}(m)  x_3^{(1)}(m)  ...  x_{N+1}^{(1)}(m)  1 ],  Y = [ x_1^{(1)}(1), x_1^{(1)}(2), ..., x_1^{(1)}(m) ]^T.
3) Applying the model from step 2), the predicted behavior variable sequence is
X̂_1^{(1)} = ( x̂_1^{(1)}(1), x̂_1^{(1)}(2), ..., x̂_1^{(1)}(m) ).
With the above steps, the grey model with the N burst factors as factor variables and the burst rate as the behavior variable is established.
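A minimal Matlab sketch of the least-squares parameter estimation and prediction, assuming the static grey model form written above; all variable names are illustrative:
% X1 is the (N+1)-by-m matrix of 1-AGO sequences from the previous sketch
m  = size(X1, 2);
B  = [X1(2:end, :)' ones(m, 1)];   % regressors: factor 1-AGO values plus a constant column
Y  = X1(1, :)';                    % 1-AGO behavior variable (burst rate)
ahat  = (B'*B) \ (B'*Y);           % least-squares parameter estimate [b2 ... b(N+1) a]'
Y1hat = B * ahat;                  % grey-model prediction in the 1-AGO domain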
(3) Establish the neural network model
The one-accumulate correlative factor sequences X_i^{(1)} (i = 2, ..., N+1) are taken as the input of a BP neural network (other neural network models may be used). The residual sequence E = ( e(1), e(2), ..., e(m) ) between the characteristic sequence X̂_1^{(1)} predicted by the grey model and the one-accumulate characteristic sequence X_1^{(1)}, where e(k) = x_1^{(1)}(k) - x̂_1^{(1)}(k), is taken as the output of the network, and on this basis the BP neural network model is established.
First, to keep the hidden-layer neurons out of saturation despite data that differ by orders of magnitude, and to ensure that the network is sufficiently sensitive to its inputs and fits the samples well, the learning sample data are preprocessed before the BP network is trained. That is, all data are normalized so that the sample values fall in the interval [0, 1]. Accordingly, when the trained network is used, its output data must be de-normalized to recover the final predicted values.
The normalization formula is
x' = ( x - x_min ) / ( x_max - x_min )
where x is a group of collected data, x_min is the minimum value in the group, x_max is the maximum value in the group, and x' is the mapped data.
The de-normalization formula is
x = x'( x_max - x_min ) + x_min
Then the toolbox in Matlab is used to train the network with the basic back-propagation algorithm (other learning algorithms may be adopted) to obtain the corresponding weights of the hidden layer and the output layer. The repeatedly trained neural network is then the mapping between the residual sequence and the one-accumulate burst factor sequences.
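A minimal Matlab sketch of this normalization and de-normalization, assuming row-wise scaling of a data matrix; the variable names are illustrative:
% normalize each row of a data matrix M onto [0, 1], keeping the extrema for de-normalization
Mmin = min(M, [], 2);
Mmax = max(M, [], 2);
Mn   = (M - repmat(Mmin, 1, size(M, 2))) ./ repmat(Mmax - Mmin, 1, size(M, 2));
% after prediction, a value vn belonging to row k is mapped back to the original scale by:
% v = vn * (Mmax(k) - Mmin(k)) + Mmin(k);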
(4) Predict the burst rate
When predicting, the prediction X̂_1^{(1)} of the grey model is first error-compensated with the compensation value Ê of the neural network to obtain the compensated prediction X̂'_1^{(1)} = X̂_1^{(1)} + Ê. Inverse accumulated generation is then applied once to obtain the predicted burst rate sequence X̂_1^{(0)}. Inverse accumulated generation is defined as follows: if X^{(1)} = ( x^{(1)}(1), x^{(1)}(2), ..., x^{(1)}(m) ) is a sequence and D is the sequence operator with
X^{(1)}D = ( x^{(1)}(1)d, x^{(1)}(2)d, ..., x^{(1)}(m)d ),  where x^{(1)}(1)d = x^{(1)}(1) and x^{(1)}(k)d = x^{(1)}(k) - x^{(1)}(k-1) for k = 2, ..., m,
then D is called the inverse accumulated generating operator, written 1-IAGO.
Through steps (1), (2), (3), and (4), the grey neural network prediction model for water supply network pipe bursts is established.
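A minimal Matlab sketch of this compensation and inverse-accumulation step, assuming Y1hat and Ehat are column vectors holding the grey prediction and the de-normalized network residual; the names are illustrative:
Y1comp = Y1hat + Ehat;               % compensate the grey prediction with the network residual
Y0hat  = [Y1comp(1); diff(Y1comp)];  % 1-IAGO: recover the predicted burst rate sequence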
The method of the invention combines the grey modeling method and a neural network model to build a grey neural network model. It overcomes the traditional burst models' need for large amounts of data, solves the small-sample prediction problem well, improves prediction accuracy, and remains equally applicable to large samples. It is especially needed by water utilities that started recording burst data and organizing maintenance only recently.
Description of drawings
Fig. 1 is the schematic block diagram of the invention.
Embodiment
An embodiment is given below to describe the specific implementation of the invention in further detail. The following example is only intended to illustrate the invention, not to limit its scope.
(1) Collect and organize the burst statistics
From the burst database of a water supply area, the pipe diameter, pipe age, and pressure data are compiled, and the burst rate is calculated (usually as an annual burst rate).
The concrete statistical method is as follows.
First, all pipe sections are divided into groups according to their diameter.
Then, for every group, the total pipe length L is calculated:
L = Σ_i l_i
where i is the pipe section number and l_i is the length of pipe section i.
The length-weighted mean pipe age A is
A = ( Σ_i a_i l_i ) / ( Σ_i l_i )
where a_i is the age of pipe section i, in years.
The flow-weighted mean absolute pressure P, based on the results of the network hydraulic simulation, is
P = ( Σ_i p_i q_i ) / ( Σ_i q_i )
where p_i is the average absolute pressure of pipe section i and q_i is the flow of pipe section i.
Finally, the average annual number of bursts per unit pipe length, i.e. the burst rate y, is
y = ( Σ_{t=1}^{n} N_t ) / ( n L )
where y is the burst rate, t is the statistical year index, N_t is the number of bursts recorded in the group in year t, and n is the total number of statistical years.
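A minimal Matlab sketch of these group statistics for one diameter group; all variable names and the example numbers are illustrative, not taken from the patent:
% per-section data for one diameter group (illustrative numbers)
l  = [1.2 0.8 2.5];      % section lengths
a  = [18  25  12];       % section ages in years
p  = [0.35 0.42 0.30];   % average absolute pressures from the hydraulic simulation
q  = [120 80 200];       % section flows
Nt = [3 5 4 2];          % bursts recorded in the group in each statistical year
L = sum(l);                      % total pipe length of the group
A = sum(a .* l) / sum(l);        % length-weighted mean pipe age
P = sum(p .* q) / sum(q);        % flow-weighted mean absolute pressure
y = sum(Nt) / (numel(Nt) * L);   % burst rate: bursts per unit length per year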
(2) Establish the grey model
Based on the 3 burst factors and the burst rate collected in step (1), m groups in total, a grey model is established with the diameter, the pipe age, and the pressure as factor variables and the burst rate as the behavior variable. The concrete steps are as follows:
1) Let X_1^{(0)} denote the burst rate sequence, X_2^{(0)} the diameter sequence, X_3^{(0)} the pipe age sequence, and X_4^{(0)} the pressure sequence.
The one-accumulate generation (1-AGO) sequence of each of the above data sequences is denoted X_i^{(1)} (i = 1, 2, 3, 4). One-accumulate generation is defined as follows: if X_i^{(0)} is an original sequence and D is the sequence operator with X_i^{(0)}D = ( x_i^{(0)}(1)d, x_i^{(0)}(2)d, ..., x_i^{(0)}(m)d ), where x_i^{(0)}(k)d = Σ_{j=1}^{k} x_i^{(0)}(j), then D is called the one-accumulate generating operator of X_i^{(0)}, and the newly generated sequence X_i^{(1)} is the one-accumulate generated sequence.
2) Using X_i^{(1)} (i = 1, 2, 3, 4), the model
x_1^{(1)}(k) = b_2 x_2^{(1)}(k) + b_3 x_3^{(1)}(k) + b_4 x_4^{(1)}(k) + a
is established. Its parameter vector â = [ b_2, b_3, b_4, a ]^T can be obtained by least-squares estimation as â = (B^T B)^{-1} B^T Y, where
B = [ x_2^{(1)}(1)  x_3^{(1)}(1)  x_4^{(1)}(1)  1 ; ... ; x_2^{(1)}(m)  x_3^{(1)}(m)  x_4^{(1)}(m)  1 ],  Y = [ x_1^{(1)}(1), ..., x_1^{(1)}(m) ]^T.
3) Applying the model from step 2), the predicted behavior variable sequence (the predicted burst rate in the 1-AGO domain) is
X̂_1^{(1)} = ( x̂_1^{(1)}(1), x̂_1^{(1)}(2), ..., x̂_1^{(1)}(m) ).
(3) Establish the neural network model
The one-accumulate correlative factor sequences X_i^{(1)} (i = 2, 3, 4) are taken as the input of a BP neural network (other neural network models could be used; this embodiment adopts a BP network). The residual sequence E = ( e(1), e(2), ..., e(m) ) between the sequence X̂_1^{(1)} predicted by the grey model and the sequence X_1^{(1)}, where e(k) = x_1^{(1)}(k) - x̂_1^{(1)}(k), is taken as the output of the network.
Then the toolbox in Matlab is used to train the network with the basic back-propagation algorithm (other learning algorithms could be adopted) to obtain the corresponding weights of the hidden layer and the output layer. The repeatedly trained neural network is then the mapping between the residual sequence and the one-accumulate burst factor sequences.
The concrete steps are as follows:
1) The one-accumulate correlative factor sequences X_i^{(1)} (i = 2, 3, 4) form the input matrix P of the BP neural network, and the residual sequence E forms the network output matrix T.
2) Data normalization. The normalization formula x' = ( x - x_min ) / ( x_max - x_min ) is applied to the input and output matrices: each row of the input matrix P is normalized separately to obtain the normalized matrix P', and the output matrix T is normalized to obtain T'.
3) Construct the BP neural network
This embodiment uses a three-layer feed-forward network. The input layer consists of three neurons, namely the one-accumulate diameter, pipe age, and pressure. There is one hidden layer, whose number of neurons is set according to the complexity of the problem; this example adopts 12 neurons (reflecting factors such as pipe material, joints, temperature variation, and burial depth). The output layer consists of one neuron, namely the burst rate residual.
The concrete steps are as follows:
a. Build the BP network architecture by calling the newff function in the Matlab function library:
net = newff(minmax(P'), [12,1], 'tansig', 'purelin', 'traingdm')
where minmax(P') gives the minimum and maximum of every row of the matrix P'; [12,1] indicates that the hidden layer has 12 neurons and the output layer has 1 neuron; tansig is the hidden-layer transfer function; purelin is the output-layer transfer function; and traingdm is the training function (gradient descent with momentum).
b. Train the BP neural network
a. Initialize the network. net.initFcn determines the initialization function of the whole network; the parameter net.layers{i}.initFcn determines the initialization function of each layer. The initwb function initializes the weight matrices and biases according to each layer's own initialization parameters (net.inputWeights{i,j}.initFcn, usually set to rands). The concrete method is as follows:
net.layers{1}.initFcn='initwb';
net.inputWeights{1,1}.initFcn='rands';
net.layerWeights{2,1}.initFcn='rands';
net.biases{1,1}.initFcn='rands';
net.biases{2,1}.initFcn='rands';
net=init(net);
net.IW{1,1} is the weight matrix from the input layer to the hidden layer, and net.LW{2,1} is the weight matrix from the hidden layer to the output layer; net.b{1,1} is the threshold vector of the hidden layer, and net.b{2,1} is the threshold of the output neuron.
b. Set the number of training epochs, the training target error, and the display interval:
net.trainParam.epochs=1500;
net.trainParam.goal=0.0008;
net.trainParam.show=100;
This sets the number of training epochs to 1500, the training target error to 0.0008, and the display interval to 100 steps.
c. Using the input matrix P' and the output matrix T', call the train function, net=train(net, P', T'), and train the network until convergence.
The mapping between the residual sequence and the one-accumulate burst factor sequences has thus been established.
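As a usage sketch, and assuming the same legacy Neural Network Toolbox interface used above (newff/train/sim), the trained network can then be applied to the normalized factor inputs and its output de-normalized back to a residual estimate; the names Pn, Tmin, and Tmax are illustrative:
En_hat = sim(net, Pn);                     % network output on the normalized factor inputs, in [0,1]
Ehat   = En_hat' .* (Tmax - Tmin) + Tmin;  % de-normalize back to the residual scale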
(4) Predict the burst rate
The prediction steps are as follows:
1) The prediction output matrix T̂' of the neural network is de-normalized with the formula x = x'( x_max - x_min ) + x_min to obtain the prediction residual matrix Ê.
2) The prediction X̂_1^{(1)} of the grey model is error-compensated with the residual matrix Ê obtained from the neural network prediction to give the compensated prediction X̂'_1^{(1)} = X̂_1^{(1)} + Ê. Inverse accumulated generation is then applied once to obtain the predicted burst rate sequence X̂_1^{(0)}. Inverse accumulated generation is defined as follows: if X^{(1)} is a sequence and D is the sequence operator with x^{(1)}(1)d = x^{(1)}(1) and x^{(1)}(k)d = x^{(1)}(k) - x^{(1)}(k-1) for k = 2, ..., m, then D is called the inverse accumulated generating operator, written 1-IAGO.
Through steps (1), (2), (3), and (4), the grey neural network prediction model for water supply network pipe bursts, with the diameter, the pipe age, and the pressure as factor variables and the burst rate as the behavior variable, is thus established.
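Putting the embodiment's steps together, a compact end-to-end Matlab sketch of the prediction stage might look as follows; it assumes the static grey model form used above and the legacy newff/train/sim interface, and every variable name is illustrative rather than taken from the patent:
% X0: 4-by-m data matrix, row 1 = burst rate, rows 2..4 = diameter, pipe age, pressure
X1 = cumsum(X0, 2);                               % step (2): 1-AGO
m  = size(X1, 2);
B  = [X1(2:4, :)' ones(m, 1)];
ahat  = (B'*B) \ (B'*X1(1, :)');                  % static grey model parameters
Y1hat = B * ahat;                                 % grey prediction in the 1-AGO domain
E = X1(1, :)' - Y1hat;                            % residual sequence used as the BP training target
% ... normalize, build, and train the BP network as in step (3) above ...
Ehat   = sim(net, Pn)' .* (Emax - Emin) + Emin;   % step (4)-1: de-normalized residual prediction
Y1comp = Y1hat + Ehat;                            % step (4)-2: error compensation
Y0hat  = [Y1comp(1); diff(Y1comp)];               % 1-IAGO: predicted burst rate sequence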

Claims (1)

1. A pipe burst prediction method based on a grey neural network, characterized in that the method comprises the following steps:
Step (1): collect the burst factors, organize the burst statistics, and calculate the burst rate; the burst data are the quantifiable burst factors, including pipe diameter, burial depth, network operating pressure, and pipe length;
Step (2): establish the grey model;
based on the N burst factors and the 1 burst rate collected in step (1), a grey model is established with the N burst factors as factor variables and the burst rate as the behavior variable; the concrete steps are as follows:
1) let X_1^{(0)} = ( x_1^{(0)}(1), x_1^{(0)}(2), ..., x_1^{(0)}(m) ) be the system characteristic data sequence, representing the m observations of the behavior variable, and let
X_2^{(0)} = ( x_2^{(0)}(1), ..., x_2^{(0)}(m) ), ..., X_{N+1}^{(0)} = ( x_{N+1}^{(0)}(1), ..., x_{N+1}^{(0)}(m) )
be the N correlative factor sequences, each representing the m observations of one factor variable;
the one-accumulate generation (1-AGO) sequence of each of the above data sequences is denoted X_i^{(1)}, i = 1, 2, ..., N+1; one-accumulate generation is defined as follows: if X_i^{(0)} is an original sequence and D is the sequence operator with X_i^{(0)}D = ( x_i^{(0)}(1)d, ..., x_i^{(0)}(m)d ), where x_i^{(0)}(k)d = Σ_{j=1}^{k} x_i^{(0)}(j), then D is called the one-accumulate generating operator of X_i^{(0)}, and the newly generated sequence X_i^{(1)} is the one-accumulate generated sequence;
2) based on the sequences X_i^{(1)}, the model
x_1^{(1)}(k) = b_2 x_2^{(1)}(k) + ... + b_{N+1} x_{N+1}^{(1)}(k) + a
is established; its parameter vector â = [ b_2, ..., b_{N+1}, a ]^T can be obtained by least-squares estimation as â = (B^T B)^{-1} B^T Y, where
B = [ x_2^{(1)}(1)  ...  x_{N+1}^{(1)}(1)  1 ; ... ; x_2^{(1)}(m)  ...  x_{N+1}^{(1)}(m)  1 ],  Y = [ x_1^{(1)}(1), ..., x_1^{(1)}(m) ]^T;
3) applying the model from step 2), the predicted behavior variable sequence is
X̂_1^{(1)} = ( x̂_1^{(1)}(1), ..., x̂_1^{(1)}(m) );
with the above steps, the grey model with the N burst factors as factor variables and the burst rate as the behavior variable is established;
Step (3): establish the neural network model;
the one-accumulate correlative factor sequences X_i^{(1)} are taken as the input of a BP neural network; the residual sequence E = ( e(1), ..., e(m) ) between the characteristic sequence X̂_1^{(1)} predicted by the grey model and the one-accumulate characteristic sequence X_1^{(1)}, where e(k) = x_1^{(1)}(k) - x̂_1^{(1)}(k), is taken as the output of the network, and the BP neural network model is established;
first, to keep the hidden-layer neurons out of saturation despite data that differ by orders of magnitude, and to ensure that the network is sufficiently sensitive to its inputs and fits the samples well, the learning sample data are preprocessed before the BP network is trained; that is, all data are normalized so that the sample values fall in the interval [0, 1]; when the trained network is used, its output data are de-normalized to recover the final predicted values;
the normalization formula is x' = ( x - x_min ) / ( x_max - x_min ), where x denotes a group of collected data, x_min the minimum value in the group, x_max the maximum value in the group, and x' the mapped data;
the de-normalization formula is x = x'( x_max - x_min ) + x_min;
then the toolbox in Matlab is used to train the network with the basic back-propagation algorithm to obtain the corresponding weights of the hidden layer and the output layer; the repeatedly trained neural network is then the mapping between the residual sequence and the one-accumulate burst factor sequences;
Step (4): predict the burst rate;
when predicting, the prediction X̂_1^{(1)} of the grey model is first error-compensated with the compensation value Ê of the neural network to obtain the compensated prediction X̂'_1^{(1)} = X̂_1^{(1)} + Ê; inverse accumulated generation is then applied once to obtain the predicted burst rate sequence X̂_1^{(0)}; inverse accumulated generation is defined as follows: if X^{(1)} is a sequence and D is the sequence operator with x^{(1)}(1)d = x^{(1)}(1) and x^{(1)}(k)d = x^{(1)}(k) - x^{(1)}(k-1) for k = 2, ..., m, then D is called the inverse accumulated generating operator, written 1-IAGO;
through steps (1), (2), (3), and (4), the grey neural network prediction model for water supply network pipe bursts is established.
CN201310151844.1A 2013-04-27 Tube explosion prediction method based on grey neural network Active CN103258243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310151844.1A CN103258243B (en) 2013-04-27 Tube explosion prediction method based on grey neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310151844.1A CN103258243B (en) 2013-04-27 Tube explosion prediction method based on grey neural network

Publications (2)

Publication Number Publication Date
CN103258243A true CN103258243A (en) 2013-08-21
CN103258243B CN103258243B (en) 2016-11-30




Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101886742A (en) * 2010-06-17 2010-11-17 北京工业大学 Leakage and pipe explosion early warning system for city water supply network
CN102073785A (en) * 2010-11-26 2011-05-25 哈尔滨工程大学 Daily gas load combination prediction method based on generalized dynamic fuzzy neural network
CN102032935A (en) * 2010-12-07 2011-04-27 杭州电子科技大学 Soft measurement method for sewage pumping station flow of urban drainage converged network
CN102174994A (en) * 2011-03-11 2011-09-07 天津大学 Pipe burst accident on-line positioning system for urban water supply pipeline network
CN102222169A (en) * 2011-06-21 2011-10-19 天津大学 Method for predicting and analyzing pipe burst of urban water supply network
CN102867132A (en) * 2012-10-16 2013-01-09 南京航空航天大学 Aviation direct-current converter online fault combined prediction method based on fractional order wavelet transformation

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10578543B2 (en) 2016-11-25 2020-03-03 Tata Consultancy Services Limited Ranking pipes for maintenance in pipe networks using approximate hydraulic metrics
CN107045785B (en) * 2017-02-08 2019-10-22 河南理工大学 A method of the short-term traffic flow forecast based on grey ELM neural network
CN107045785A (en) * 2017-02-08 2017-08-15 河南理工大学 A kind of method of the short-term traffic flow forecast based on grey ELM neutral nets
CN108573076A (en) * 2017-03-09 2018-09-25 中国石油化工股份有限公司 A kind of prediction technique of shale gas pressing crack construction accident
CN108573076B (en) * 2017-03-09 2021-08-31 中国石油化工股份有限公司 Prediction method for shale gas fracturing construction accident
CN110383308A (en) * 2017-04-13 2019-10-25 甲骨文国际公司 Predict the new type auto artificial intelligence system of pipe leakage
CN110383308B (en) * 2017-04-13 2023-12-26 甲骨文国际公司 Novel automatic artificial intelligence system for predicting pipeline leakage
CN107145691A (en) * 2017-06-23 2017-09-08 广东青藤环境科技有限公司 A kind of public supply mains booster prediction analysis method
CN107704958A (en) * 2017-09-30 2018-02-16 渤海大学 A kind of thermal power plant's generated energy Forecasting Methodology of multivariable modeling
CN109558900A (en) * 2018-11-16 2019-04-02 佛山科学技术学院 A kind of water supply pipe explosion time forecasting methods neural network based and device
CN109558900B (en) * 2018-11-16 2023-11-03 佛山科学技术学院 Neural network-based water supply pipe burst time prediction method and device
CN109886506A (en) * 2019-03-14 2019-06-14 重庆大学 A kind of water supply network booster risk analysis method
CN109886506B (en) * 2019-03-14 2023-05-23 重庆大学 Water supply network pipe explosion risk analysis method
CN111008606A (en) * 2019-12-10 2020-04-14 上海商汤智能科技有限公司 Image prediction method and device, electronic equipment and storage medium
CN111008606B (en) * 2019-12-10 2024-04-16 上海商汤智能科技有限公司 Image prediction method and device, electronic equipment and storage medium
CN111539515A (en) * 2020-04-21 2020-08-14 中国电子科技集团公司第三十八研究所 Complex equipment maintenance decision method based on fault prediction
CN114519262B (en) * 2022-01-25 2024-02-20 河南大学 Air target threat prediction method based on improved GM (1, 1) model
CN114519262A (en) * 2022-01-25 2022-05-20 河南大学 Air target threat prediction method based on improved GM (1,1) model
CN115358475B (en) * 2022-08-29 2023-05-30 河南农业大学 Disaster prediction method and system based on support vector machine and gray BP neural network
CN115358475A (en) * 2022-08-29 2022-11-18 河南农业大学 Disaster prediction method and system based on support vector machine and gray BP neural network

Similar Documents

Publication Publication Date Title
Jafar et al. Application of Artificial Neural Networks (ANN) to model the failure of urban water mains
Kisi et al. Forecasting daily lake levels using artificial intelligence approaches
CN103711523B (en) Based on the gas density real-time predicting method of local decomposition-Evolutionary Neural Network
Li et al. A large-scale sensor missing data imputation framework for dams using deep learning and transfer learning strategy
Allawi et al. Novel reservoir system simulation procedure for gap minimization between water supply and demand
Dong et al. An integrated deep neural network approach for large-scale water quality time series prediction
Xu et al. Model and algorithm of BP neural network based on expanded multichain quantum optimization
Li et al. A data-driven prediction model for maximum pitting corrosion depth of subsea oil pipelines using SSA-LSTM approach
Li et al. Evolutionary deep learning with extended Kalman filter for effective prediction modeling and efficient data assimilation
Kerwin et al. Performance comparison for pipe failure prediction using artificial neural networks
Chen et al. Transfer life prediction of gears by cross-domain health indicator construction and multi-hierarchical long-term memory augmented network
Jia et al. Water quality prediction method based on LSTM-BP
Ladanu et al. Enhancing artificial neural network with multi-objective evolutionary algorithm for optimizing real time reservoir operations: a review
Gao et al. Gas outburst prediction based on the intelligent Dempster-Shafer evidence theory
Chen et al. Health diagnosis of concrete dams with continuous missing data for assessing structural deformation based on tSNE–AHC algorithm and deep transfer learning
CN103258243A (en) Tube explosion predicting method based on grey neural network
Ma et al. A grey forecasting model based on BP neural network for crude oil production and consumption in China
CN111414927A (en) Method for evaluating seawater quality
Jun et al. Deep leaning neural networks for determining replacement timing of steel water transmission pipes
Reddy et al. An optimal neural network model for software effort estimation
Wu et al. Prediction of energy consumption time series using Neural Networks combined with exogenous series
CN103258243B (en) Tube explosion prediction method based on grey neural network
Zhang et al. BP neural network model based on the K-means clustering to predict the share price
CN106529725A (en) Gas outburst prediction method based on firefly algorithm and SOM network
Jalalkamali et al. Application of hybrid neural modeling and radial basis function neural network to estimate leakage rate in water distribution network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20130821

Assignee: HANGZHOU ZHONGZI FENGTAI ENVIRONMENT TECHNOLOGY Co.,Ltd.

Assignor: HANGZHOU DIANZI University

Contract record no.: X2020330000109

Denomination of invention: Prediction method of tube burst based on Grey Neural Network

Granted publication date: 20161130

License type: Common License

Record date: 20201129