CN107590565A - A kind of method and device for building building energy consumption forecast model - Google Patents

A kind of method and device for building building energy consumption forecast model Download PDF

Info

Publication number
CN107590565A
CN107590565A CN201710806517.3A CN201710806517A CN107590565A CN 107590565 A CN107590565 A CN 107590565A CN 201710806517 A CN201710806517 A CN 201710806517A CN 107590565 A CN107590565 A CN 107590565A
Authority
CN
China
Prior art keywords
factor
influence
neural network
network training
training pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710806517.3A
Other languages
Chinese (zh)
Other versions
CN107590565B (en
Inventor
宋扬
官泽
孔祥旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shougang Automation Information Technology Co Ltd
Original Assignee
Beijing Shougang Automation Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shougang Automation Information Technology Co Ltd filed Critical Beijing Shougang Automation Information Technology Co Ltd
Priority to CN201710806517.3A priority Critical patent/CN107590565B/en
Publication of CN107590565A publication Critical patent/CN107590565A/en
Application granted granted Critical
Publication of CN107590565B publication Critical patent/CN107590565B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The embodiments of the invention provide a kind of method and device for building building energy consumption forecast model, method includes:Obtain energy consumption factor set;It is classified as linearly related factor of influence set and nonlinear correlation factor of influence set;Corresponding Bayesian network model is built respectively;First main affecting factors, the first non-principal factor of influence, the second main affecting factors and the second non-principal factor of influence are grouped into based on the corresponding Bayesian network model;Build each BP neural network training pattern;Based on training sample data, each BP neural network training pattern is trained respectively;Based on default test sample data, inspection is predicted to each BP neural network training pattern after training respectively, exports prediction result value;If the error of prediction result value is in default error range, the energy consumption forecast model of the output linearity relative influence factor and the energy consumption forecast model of nonlinear correlation factor of influence.

Description

A kind of method and device for building building energy consumption forecast model
Technical field
The invention belongs to the data analysis technique field of building trade, more particularly to a kind of structure building energy consumption forecast model Method and device.
Background technology
Building energy consumption trend analysis has lasted for many years as the focus of lot of domestic and foreign scholar's research, regardless of whether being Which kind of analysis method, all lack the narration for the critical impact factor in uncertain energy consumption system, therefore become to energy consumption When gesture is predicted, precision of prediction is low, cause can not prediction of energy consumption exactly trend.
The content of the invention
The problem of existing for prior art, the embodiments of the invention provide a kind of side for building building energy consumption forecast model Method and device, when being predicted in the prior art to building energy consumption trend for solution, the low technical problem of precision of prediction.
The embodiment of the present invention provides a kind of method for building building energy consumption forecast model, and methods described includes:
Building priori data is obtained, energy consumption factor set is obtained based on the priori data;
The energy consumption factor is classified, the energy consumption factor is divided into linearly related factor of influence set And nonlinear correlation factor of influence set;
The linearly related factor of influence set and the nonlinear correlation factor of influence set are built respectively corresponding Bayesian network model;
First master in the linearly related factor of influence set is determined based on the corresponding Bayesian network model respectively Want in factor of influence, the first non-principal factor of influence, the nonlinear correlation factor of influence set the second main affecting factors and Second non-principal factor of influence;
The training of first BP neural network is built based on first main affecting factors, the first non-principal factor of influence Model;The 2nd BP nerves are built based on pretreated second main affecting factors and the second non-principal factor of influence Network training model;
Training sample data are obtained, based on the training sample data, mould is trained to first BP neural network respectively Type and the second BP neural network training pattern are trained;
Based on default test sample data, respectively to the first BP neural network training pattern and second after training BP neural network training pattern is predicted inspection, exports prediction result value;
The error of the prediction result value is judged whether in default error range, if the error of the prediction result value In default error range, export the energy consumption forecast model of the linearly related factor of influence and the nonlinear correlation influence because The energy consumption forecast model of son.
It is described based on pretreated first main affecting factors, the first non-principal influence in such scheme The factor builds the first BP neural network training pattern;Based on pretreated second main affecting factors and described second non- Before main affecting factors build the second BP neural network training pattern, including:
To first main affecting factors, the first non-principal factor of influence, second main affecting factors and The second non-principal factor of influence carries out being ashed pretreatment and normalization pretreatment.
In such scheme, it is described based on the corresponding Bayesian network model determine respectively it is described it is linearly related influence because In subclass second in the first main affecting factors, the first non-principal factor of influence, the nonlinear correlation factor of influence set Main affecting factors and the second non-principal factor of influence, including:
The probability distribution of each node in directed acyclic graph is calculated respectively in corresponding Bayesian network model, based on described Probability distribution obtains the relative weight of each node respectively;Each node is corresponding with each factor of influence;
First main affecting factors, described first non-master are determined according to the relative weight of each factor of influence respectively Want factor of influence, second main affecting factors and the second non-principal factor of influence.
In such scheme, the variable bag of the first BP neural network training pattern and the second BP neural network training pattern Include:
Input layer number is n, node in hidden layer p, output layer nodes q;
Study precision is ε, and maximum study number is M;
The hidden layer input weights are wih, the hidden layer output weights are who;Each Node B threshold of hidden layer is bh, each Node B threshold of output layer is bo
Activation primitive is
The error function isThe yooTo be any one in the output vector of output layer Individual vector, the doFor any one vector in anticipated output vector;
Input vector is x=(x1, x2..., xn);
The anticipated output vector is d=(d1, d2..., dq);
The input vector of the hidden layer is hi=(hi1, hi2..., hip);
The output vector of the hidden layer is ho=(ho1, ho2..., hop);
The input vector of the output layer is yi=(yi1, yi2..., yiq);
The output vector of the output layer is yo=(yo1, yo2..., yoq)。
In such scheme, the acquisition training sample data, including:
Respectively by pretreated first main affecting factors of the normalization, the first non-principal influence because Sub, described second main affecting factors and the data sequence of the second non-principal factor of influence are segmented, and form n m+1 Length, the data segment that has coincidence;Each data segment is a training sample data.
It is described to be based on the training sample data in such scheme, respectively to the first BP neural network training pattern And second BP neural network training pattern be trained, including:
Determine that the error function is trained to the first BP neural network training pattern and the second BP neural network respectively The partial derivative δ of each node of output layer of modelo
Determine that the error function is trained to the first BP neural network training pattern and the second BP neural network respectively Partial derivative-the δ of each node of model hidden layerh
It is utilized respectively the partial derivative δ of each node of the output layeroAnd hohIt is w to correct the hidden layer output weightsho
It is utilized respectively the partial derivative-δ of each node of the hidden layerhAnd xiCorrect the input weight w of the hidden layerih;Institute State xiFor any one node in input layer in corresponding BP neural network model.
It is described to be based on default test sample data in such scheme, respectively to the first BP nerve nets after training Network training pattern and the second BP neural network training pattern are predicted, and export prediction result value, including:
Based on the test sample data, normalized function is utilizedResolving inversely, output is once The test sample data after reduction;The xmaxFor the maximum in test sample data sequence, the xminFor test specimens Minimum value in notebook data sequence;
Original function is gone back using ashingTo the test sample number after once reducing According to secondary reduction, the test sample data after secondary reduction are exported;
Based on the test sample data after the secondary reduction, respectively to the first BP neural network training pattern And second BP neural network training pattern be predicted, export prediction result value.
In such scheme, the test sample data based on after the secondary reduction, respectively to the first BP Neural network training model and the second BP neural network training pattern are predicted, and export prediction result value, including:
Based on the test sample data after the secondary reduction, the first BP neural network training pattern is carried out Prediction, export the first prediction result value;
Based on the test sample data after the secondary reduction, the second BP neural network training pattern is carried out Prediction, export the second prediction result value;
Function and whitening processing function are handled respectively to the first prediction result value and described second using renormalization Prediction result value is handled, and obtains the first predicted value and the second predicted value;
It is fitted using the first predicted value described in linear regression function pair and second predicted value, obtains the prediction End value.
The embodiment of the present invention also provides a kind of device for building building energy consumption forecast model, and described device includes:
Acquiring unit, for obtaining priori building data, energy consumption factor set is obtained based on the priori data;
Taxon, for classifying to the energy consumption factor, the energy consumption factor is divided into linear phase Close factor of influence set and nonlinear correlation factor of influence set;
First construction unit, for the linearly related factor of influence set and the nonlinear correlation are influenceed respectively because Subclass builds corresponding Bayesian network model;
Determining unit, for determining the linearly related factor of influence respectively based on the corresponding Bayesian network model Second master in first main affecting factors, the first non-principal factor of influence, the nonlinear correlation factor of influence set in set Want factor of influence and the second non-principal factor of influence;
Second construction unit, for based on first main affecting factors, the first non-principal factor of influence structure First BP neural network training pattern;Based on pretreated second main affecting factors and the second non-principal influence The factor builds the second BP neural network training pattern;
Training unit, for obtaining training sample data, based on the training sample data, respectively to the first BP god It is trained through network training model and the second BP neural network training pattern;
Predicting unit, for based on default test sample data, respectively to first BP neural network after training Training pattern and the second BP neural network training pattern are predicted inspection, export prediction result value;
Output unit, for judging the error of the prediction result value whether in default error range, if described pre- The error of end value is surveyed in default error range, exports the energy consumption forecast model of the linearly related factor of influence and described non- The energy consumption forecast model of linearly related factor of influence.
In such scheme, described device also includes:Pretreatment unit, for being based on pretreatment in second construction unit First main affecting factors afterwards, the first non-principal factor of influence build the first BP neural network training pattern;Base The second BP neural network instruction is built in pretreated second main affecting factors and the second non-principal factor of influence Before practicing model, to first main affecting factors, the first non-principal factor of influence, second main affecting factors And the second non-principal factor of influence carries out being ashed pretreatment and normalization pretreatment.
The embodiments of the invention provide a kind of method and device of building energy consumption forecast model, methods described includes:Obtain Priori data, energy consumption factor set is obtained based on the priori data;The energy consumption factor is classified, by institute State the energy consumption factor and be divided into linearly related factor of influence set and nonlinear correlation factor of influence set;Respectively to described linear The set of the relative influence factor and the nonlinear correlation factor of influence set build corresponding Bayesian network model;Based on corresponding The Bayesian network model determine the first main affecting factors in the linearly related factor of influence set, first non-respectively In main affecting factors, the nonlinear correlation factor of influence set the second main affecting factors and the second non-principal influence because Son;First BP neural network training pattern is built based on first main affecting factors, the first non-principal factor of influence; The second BP neural network is built based on pretreated second main affecting factors and the second non-principal factor of influence Training pattern;Training sample data are obtained, based on the training sample data, mould is trained to first BP neural network respectively Type and the second BP neural network training pattern are trained;Based on default test sample data, respectively to described in after training First BP neural network training pattern and the second BP neural network training pattern are predicted inspection, export prediction result value;Sentence Whether the error for the prediction result value of breaking is in default error range, if the error of the prediction result value is in default mistake Poor scope, the energy consumption forecast model of the output linearly related factor of influence and the energy consumption of the nonlinear correlation factor of influence are pre- Survey model;In this way, using Bayesian network model the chief factor can be obtained from more building Effects of Factors events, that is, build Build the main affecting factors of energy consumption;Recycle BP neural network training pattern constantly to train, forecast test, obtain approaching to reality number According to the energy consumption forecast model of fitting degree, and then the precision of prediction of building energy consumption forecast model can be improved, Accurate Prediction building energy The trend of consumption.
Brief description of the drawings
Fig. 1 is the method flow schematic diagram for the structure building energy consumption forecast model that the embodiment of the present invention one provides;
Fig. 2 is the overall schematic for the structure building energy consumption forecast model that the embodiment of the present invention two provides.
Embodiment
During in order to solve in the prior art to be predicted building energy consumption trend, the low technical problem of precision of prediction, this hair Bright to provide a kind of method for building building energy consumption forecast model, methods described includes:Priori data is obtained, based on the priori Data acquisition energy consumption factor set;The energy consumption factor is classified, the energy consumption factor is divided into line Property relative influence factor set and nonlinear correlation factor of influence set;Respectively to the linearly related factor of influence set and institute State the set of nonlinear correlation factor of influence and build corresponding Bayesian network model;Based on the corresponding Bayesian network model The first main affecting factors in the linearly related factor of influence set, the first non-principal factor of influence, described non-are determined respectively Second main affecting factors and the second non-principal factor of influence in linearly related factor of influence set;Based on the described first main shadow Ring the factor, the first non-principal factor of influence builds the first BP neural network training pattern;Based on pretreated described Two main affecting factors and the second non-principal factor of influence build the second BP neural network training pattern;Obtain training sample Data, based on the training sample data, the first BP neural network training pattern and the second BP neural network are instructed respectively Practice model to be trained;Based on default test sample data, mould is trained to first BP neural network after training respectively Type and the second BP neural network training pattern are predicted inspection, export prediction result value;Judge the mistake of the prediction result value Whether difference is in default error range, if the error of the prediction result value is described linear in default error range, output The energy consumption forecast model of the energy consumption forecast model of the relative influence factor and the nonlinear correlation factor of influence.
Technical scheme is described in further detail below by drawings and the specific embodiments.
Embodiment one
The present embodiment provides a kind of method for building building energy consumption forecast model, as shown in figure 1, methods described includes:
S101, obtains building priori data, and energy consumption factor set is obtained based on the priori data;
In this step, it is necessary first to obtain building priori data, energy consumption factor set is obtained based on the priori data Close.
Then the energy consumption factor is classified, specifically, according to the energy consumption factor set that has obtained and The data distribution of each factor, decide whether to take normalization and ashing processing, by pretreated each factor respectively and power consumption values First-order linear regression fit analysis is done, the energy consumption factor is divided into by linearly related factor of influence collection by linear relationship Conjunction and nonlinear correlation factor of influence set.
S102, phase is built to the linearly related factor of influence set and the nonlinear correlation factor of influence set respectively The Bayesian network model answered;
After the energy consumption factor is divided into linearly related factor of influence set and nonlinear correlation factor of influence set, it is based on The priori data obtains in linearly related factor of influence set and nonlinear correlation factor of influence set between each factor respectively Incidence relation, corresponding Bayesian network model is then built according to the relation between each factor respectively.
For theory, the Bayesian network model is made up of directed acyclic graph.The directed acyclic graph is G= (I, E), the I are the set of all nodes, and E is the set of directed connection line segment.Make XiA point i's is random in expression point set I Variable, and remember that point set I stochastic variable set representations are X={ xi, i ∈ I }, if X joint probability can be expressed as such as formula (1) institute Show, then then directed acyclic graph is referred to as that G forms a Bayesian network model.
It is described in formula (1)pa(i)Represent the father node of node i.
Accordingly for arbitrary stochastic variable (arbitrary node), its probability distribution can be by respective local condition Probability distribution, which is multiplied, to be drawn, as shown in formula (2):
p(x1, x2..., xk)=p (xk|x1, x2..., xk-1)…p(x2|x1)p(x1) (2)
So it is based on probability distribution, the relative weight of each node is with regard to that can calculate in directed acyclic graph.
S103, the is determined in the linearly related factor of influence set respectively based on the corresponding Bayesian network model In one main affecting factors, the first non-principal factor of influence, the nonlinear correlation factor of influence set second it is main influence because Son and the second non-principal factor of influence;
In this step, after the relative weight for calculating each node, because each node is corresponding with each factor of influence, because This correspondingly just gets the relative weight of each factor of influence, true further according to the relative weight difference of each factor of influence Fixed first main affecting factors, the first non-principal factor of influence, it is exactly second main affecting factors and institute State the second non-principal factor of influence.Wherein, the relative weight of first main affecting factors is linearly related factor of influence collection The maximum factor of influence of relative weight in conjunction, correspondingly, the other influences factor is just the first non-principal factor of influence.Described second The relative weight of main affecting factors is the factor of influence that relative weight is maximum in nonlinear correlation factor of influence set, accordingly Ground, the other influences factor are just the second non-principal factor of influence.
S104, the first BP nerve nets are built based on first main affecting factors, the first non-principal factor of influence Network training pattern;Based on pretreated second main affecting factors and the second non-principal factor of influence structure second BP neural network training pattern;
Determine the first main affecting factors in the linearly related factor of influence set, the first non-principal factor of influence, In the nonlinear correlation factor of influence set after the second main affecting factors and the second non-principal factor of influence, also need to described First main affecting factors, the first non-principal factor of influence, second main affecting factors and described second non-principal Factor of influence carries out being ashed pretreatment and normalization pretreatment.
Specifically, by taking the first main affecting factors as an example, it is assumed that the original series of the first main affecting factors are:
X(0)={ X(0)(1), X(0)(2)…X(0)(n)}
Then the sequence of one-accumulate generation is:
X(1)={ X(1)(1), X(1)(2)…X(1)(n)}
Wherein,
Make Z(1)For X(1)Close to average, then generate following sequence:
Z(1)=Z(1)(2), Z(1)(3)…Z(1)(n),
Z(1)(k)=0.5 (x(1)(k)+x(1)(k-1)),
Then the Grey Differential Equation model of ashing processing model GM (1,1) is:
X(0)(k)+az(1)(k)=b
NoteSo grey differential equation obtains least-squares estimation parameter satisfaction
Wherein,
So, then can claimAlbefaction equation.
GM (1,1) Grey Differential Equation X can to sum up be calculated(0)(k)+az(1)(k)=b time series is:
Reduced equation (albefaction) after being so ashed, also can ashing go back original function and be referred to as
Ashing thus can be carried out to original series to handle.
Then the data after handling ashing are normalized:
Specifically utilize formulaBy input data normalization in [- 1,1] section, wherein, this Shi Suoshu xmaxFor the first main maximum influenceed in shadow data sequence, the xminFor the first main influence shadow data sequence Minimum value in row, y are the data that draw after pretreatment, the input data be after normalized first it is main influence because The data sequence of son.
Likewise it is possible in the same manner to the first non-principal main affecting factors of factor of influence second and described Two non-principal factors of influence are pre-processed.
It is then based on pretreated first main affecting factors, the first non-principal factor of influence structure first BP neural network training pattern;Based on pretreated second main affecting factors and the second non-principal factor of influence Build the second BP neural network training pattern.
Here, the first BP neural network training pattern and the second BP neural network training pattern are identical structures, All include:Input layer, hidden layer and output layer.
The variable of the first BP neural network training pattern and the second BP neural network training pattern includes:
Input layer number is n, node in hidden layer p, output layer nodes q;
Study precision is ε, and maximum study number is M;
The hidden layer input weights are wih, the hidden layer output weights are who;Each Node B threshold of hidden layer is bh, each Node B threshold of output layer is bo
Activation primitive is
The error function isThe yooTo be any in the output vector of output layer One vector, the doFor any one vector in anticipated output vector;
Input vector is x=(x1, x2..., xn);
Anticipated output vector is d=(d1, d2..., dq);
The input vector of the hidden layer is hi=(hi1, hi2..., hip);
The output vector of the hidden layer is ho=(ho1, ho2..., hop);
The input vector of the output layer is yi=(yi1, yi2..., yiq);
The output vector of the output layer is yo=(yo1, yo2..., yoq)。
S105, training sample data are obtained, based on the training sample data, first BP neural network is instructed respectively Practice model and the second BP neural network training pattern is trained;
After the first BP neural network training pattern and the second BP neural network training pattern are built in this step, also need Training sample data are obtained, based on the training sample data, respectively to the first BP neural network training pattern and second BP neural network training pattern is trained.
Specifically, respectively by pretreated first main affecting factors of the normalization, described first non-principal The data sequence of factor of influence, second main affecting factors and the second non-principal factor of influence is segmented, and forms n Individual m+1 length, the data segment that has coincidence;Each data segment is a training sample data.Input bit is the preceding m moment Value, carry-out bit are the value at m+1 moment, gradually push ahead and are configured with the sample matrix of duplicate data section (sample matrix is n rows M+1 is arranged);
The sample matrix row training added into each training pattern after segment processing, carries out exporting calculating and backpropagation meter Output is calculated, it is for error correction that the backpropagation, which calculates,;Wherein, the backpropagation is calculated and included:
Determine that the error function is trained to the first BP neural network training pattern and the second BP neural network respectively The partial derivative δ of each node of output layer of modelo;Determine the error function to the first BP neural network training pattern respectively And second each node of BP neural network training pattern hidden layer partial derivative-δh;It is utilized respectively the inclined of each node of the output layer Derivative δoAnd the output valve ho of each node of hidden layerh, it is w to correct the hidden layer output weightsho;H is the arbitrary value in 0 to p; It is utilized respectively the partial derivative-δ of each node of the hidden layerhAnd xiCorrect the input weight w of the hidden layerih;The xiTo be corresponding BP neural network model in any one node in input layer, it is exactly one to correspond in corresponding Bayesian network model Factor of influence.
The first BP neural network training pattern and the second BP neural network training pattern are preserved after amendment.
S106, based on default test sample data, respectively to the first BP neural network training pattern after training And second BP neural network training pattern be predicted, export prediction result value;Judge the prediction result value error whether In default error range, if the error of the prediction result value is in default error range, the output linearly related shadow Ring the energy consumption forecast model of the factor and the energy consumption forecast model of the nonlinear correlation factor of influence.
In this step, the test sample data of prediction are obtained, as the first BP neural network training pattern and second Input data in BP neural network training pattern, the first BP neural network training pattern and the second BP neural network training Model carries out the test sample data after renormalization processing and whitening processing output reduction to test sample data.The reduction Test sample data afterwards are not by the data by pretreatment.
Specifically, based on the test sample data, normalized function is utilizedCarry out anti-normalizing Change, i.e. resolving inversely, export the test sample data after once reducing;Now, the xmaxFor test sample data sequence In maximum, the xminFor the minimum value in test sample data sequence;
Original function (equation) is gone back using ashingTo the test after once reducing Sample data secondary reduction, export the test sample data after secondary reduction;
The test sample data after the secondary reduction are then based on, first BP neural network is trained respectively Model and the second BP neural network training pattern are predicted, and export prediction result value, including:
Based on the test sample data after the secondary reduction, the first BP neural network training pattern is carried out Prediction, export the first prediction result value;
Based on the test sample data after the secondary reduction, the second BP neural network training pattern is carried out Prediction, export the second prediction result value;
Function and whitening processing function are handled respectively to the first prediction result value and described second using renormalization Prediction result value is handled, and obtains the first predicted value and the second predicted value;
It is fitted using the first predicted value described in linear regression function pair and second predicted value, obtains fitting result Whether value, i.e. prediction result value, on the basis of actual building energy consumption value, judge the error of prediction result value in default error model In enclosing, if the error of the prediction result value in the rate of accuracy reached of default error range or prediction result value at least 90%, that is, fitting result value is exported as energy consumption predicted value.The energy consumption for exporting the linearly related factor of influence simultaneously is pre- Survey the energy consumption forecast model of model and the nonlinear correlation factor of influence, the first main affecting factors and second it is main influence because Son.
If the error of prediction result value is not in default error range, on the basis of actual building energy consumption value, again The study precision of the first BP neural network training pattern and the second BP neural network training pattern is set, learns number, is hidden Input weights and hidden layer output weights containing layer, form new the first BP neural network training pattern and the second BP neural network Training pattern, mould is trained to the first BP neural network training pattern and the second BP neural network again according to method same above Type is trained, predicted, draws new prediction result value, until prediction result value error in default error range, The energy consumption forecast model of the final linearly related factor of influence of output and the energy consumption of the nonlinear correlation factor of influence are pre- Model is surveyed, and exports the first main affecting factors and the second main affecting factors.
Embodiment two
Corresponding to embodiment one, the present embodiment provides a kind of device for building building energy consumption forecast model, as shown in Fig. 2 Described device includes:Acquiring unit 21, taxon 22, the first construction unit 23, determining unit 24, the second construction unit 25, Training unit 26, predicting unit 27, output unit 28 and pretreatment unit 29;Wherein,
Acquiring unit 21 is used to obtain priori data first, and energy consumption factor set is obtained based on the priori data.
Taxon 22 is used to classify to the energy consumption factor, specifically, according to the energy consumption obtained Factor set and the data distribution of each factor, decide whether to take normalization and ashing processing, by pretreated each Factor minute Not and power consumption values do first-order linear regression fit analysis, and the energy consumption factor is divided into linear correlation by linear relationship Factor of influence set and nonlinear correlation factor of influence set.
The energy consumption factor is divided into linearly related factor of influence set and nonlinear correlation factor of influence by taxon 22 After set, the first construction unit 23 is used to obtain linearly related factor of influence set and non-linear respectively based on the priori data Incidence relation in relative influence factor set between each factor, then built respectively accordingly according to the relation between each factor Bayesian network model.
For theory, the Bayesian network model is made up of directed acyclic graph.The directed acyclic graph is G= (I, E), the I are the set of all nodes, and E is the set of directed connection line segment.Make XiA point i's is random in expression point set I Variable, and remember that point set I stochastic variable set representations are X={ xi, i ∈ I }, if X joint probability can be expressed as such as formula (1) institute Show, then then directed acyclic graph is referred to as that G forms a Bayesian network model.
It is described in formula (1)pa(i)Represent the father node of node i.
Accordingly for arbitrary stochastic variable (arbitrary node), its probability distribution can be by respective local condition Probability distribution, which is multiplied, to be drawn, as shown in formula (2):
p(x1, x2..., xk)=p (xk|x1, x2..., xk-1)…p(x2|x1)p(x1) (2)
So it is based on probability distribution, the relative weight of each node is with regard to that can calculate in directed acyclic graph.
Correspondingly, after the relative weight of each node is with regard to that can calculate, because each node is corresponding with each factor of influence , it is thus determined that unit 24 correspondingly just gets the relative weight of each factor of influence, further according to each factor of influence Relative weight determine first main affecting factors, the first non-principal factor of influence, the second main shadow respectively Ring the factor and the second non-principal factor of influence.Wherein, the relative weight of first main affecting factors is linearly related The maximum factor of influence of relative weight in factor of influence set, correspondingly, the other influences factor just for the first non-principal influence because Son.The relative weight of second main affecting factors is the influence that relative weight is maximum in nonlinear correlation factor of influence set The factor, correspondingly, the other influences factor are just the second non-principal factor of influence.
So pretreatment unit 29 is used for first main affecting factors, the first non-principal factor of influence, institute State the second main affecting factors and the second non-principal factor of influence carries out being ashed pretreatment and normalization pretreatment.
Specifically, by taking the first main affecting factors as an example, it is assumed that the original series of the first main affecting factors are:
X(0)={ X(0)(1), X(0)(2)…X(0)(n)}
Then the sequence of one-accumulate generation is:
X(1)={ X(1)(1), X(1)(2)…X(1)(n)}
Wherein,
Make Z(1)For X(1)Close to average, then generate following sequence:
Z(1)=Z(1)(2), Z(1)(3)…Z(1)(n),
Z(1)(k)=0.5 (x(1)(k)+x(1)(k-1)),
Then the Grey Differential Equation model of ashing processing model GM (1,1) is:
X(0)(k)+az(1)(k)=b
NoteSo grey differential equation obtains least-squares estimation parameter satisfaction
Wherein,
So, then can claimFor X(0)(k)+az(1)(k)=b albefaction equation.
GM (1,1) Grey Differential Equation X can to sum up be calculated(0)(k)+az(1)(k)=b time series is:
So be ashed after reduced equation (albefaction) be
Ashing thus can be carried out to original series to handle.
Then the data after handling ashing are normalized:
Specifically utilize formulaBy input data and normalization in [- 1,1] section, wherein, The xmaxFor the first main maximum influenceed in shadow data sequence, the xminFor the first main influence shadow data sequence In minimum value, y is the data that draw after pretreatment, and the input data is the first main affecting factors after normalized Data sequence.
Similarly, pretreatment unit 29 can mainly influence on the first non-principal factor of influence second in the same manner The factor and the second non-principal factor of influence are pre-processed.
And then the can of the second construction unit 25 is based on first main affecting factors, the first non-principal influence The factor builds the first BP neural network training pattern;Based on pretreated second main affecting factors and described second non- Main affecting factors build the second BP neural network training pattern.
Here, it is identical structure to state the first BP neural network training pattern and the second BP neural network training pattern, all Including:Input layer, hidden layer and output layer.
The variable of the first BP neural network training pattern and the second BP neural network training pattern includes:
Input layer number is n, node in hidden layer p, output layer nodes q;
Study precision is ε, and maximum study number is M;
The hidden layer input weights are wih, the hidden layer output weights are who;Each Node B threshold of hidden layer is bh, each Node B threshold of output layer is bo
Activation primitive is
The error function isThe yooTo be any one in the output vector of output layer Individual vector, the doFor any one vector in anticipated output vector;
Input vector is x=(x1, x2..., xn);
Anticipated output vector is d=(d1, d2..., dq);
The input vector of the hidden layer is hi=(hi1, hi2..., hip);
The output vector of the hidden layer is ho=(ho1, ho2..., hop);
The input vector of the output layer is yi=(yi1, yi2..., yiq);
The output vector of the output layer is yo=(yo1, yo2..., yoq)。
After first BP neural network training pattern and the second BP neural network training pattern are built, training unit 26 is used In obtaining training sample data, based on the training sample data, respectively to the first BP neural network training pattern and the Two BP neural network training patterns are trained.
Specifically, training unit 26 is respectively by pretreated first main affecting factors of the normalization, described The data sequence of first non-principal factor of influence, second main affecting factors and the second non-principal factor of influence is carried out Segmentation, data segment forming n m+1 length, having coincidence;Each data segment is a training sample data.Input bit is preceding m The value at individual moment, carry-out bit are the value at m+1 moment, gradually push ahead the sample matrix (sample for being configured with duplicate data section Matrix arranges for n rows m+1);
The sample matrix row training added into each training pattern after segment processing, carries out exporting calculating and backpropagation meter Output is calculated, it is for error correction that the backpropagation, which calculates,;Wherein, the backpropagation is calculated and included:
Determine that the error function is trained to the first BP neural network training pattern and the second BP neural network respectively The partial derivative δ of each node of output layer of modelo;Determine the error function to the first BP neural network training pattern respectively And second each node of BP neural network training pattern hidden layer partial derivative-δh;It is utilized respectively the inclined of each node of the output layer Derivative δoAnd the output valve ho of each node of hidden layerhIt is w to correct the hidden layer output weightsho;H is the arbitrary value in 0 to p; It is utilized respectively the partial derivative-δ of each node of the hidden layerhAnd xiCorrect the input weight w of the hidden layerih;The xiTo be corresponding BP neural network model in any one node in input layer, it is exactly one to correspond in corresponding Bayesian network model Factor of influence.
The first BP neural network training pattern and the second BP neural network training pattern are preserved after amendment.
So predicting unit 27 is used to be based on default test sample data, respectively to the first BP nerves after training Network training model and the second BP neural network training pattern are predicted, and export prediction result value;
Specifically, predicting unit 27 is based on the test sample data, utilizes normalized function Renormalization, i.e. resolving inversely are carried out, exports the test sample data after once reducing;The xmaxFor test sample number According to the maximum in sequence, the xminFor the minimum value in test sample data sequence;
Original function (equation) is gone back using ashingTo the test after once reducing Sample data secondary reduction, export the test sample data after secondary reduction;Test sample data after the reduction are only It is not by the data by pretreatment.
The test sample data after the secondary reduction are then based on, first BP neural network is trained respectively Model and the second BP neural network training pattern are predicted, and export prediction result value, including:
Based on the test sample data after the secondary reduction, the first BP neural network training pattern is carried out Prediction, export the first prediction result value;
Based on the test sample data after the secondary reduction, the second BP neural network training pattern is carried out Prediction, export the second prediction result value;
Function and whitening processing function are handled respectively to the first prediction result value and described second using renormalization Prediction result value is handled, and obtains the first predicted value and the second predicted value;
It is fitted using the first predicted value described in linear regression function pair and second predicted value, obtains fitting result Value, i.e. prediction result value.
Whether the error that output unit 28 is used to judge the prediction result value is in default error range, if described pre- The error of end value is surveyed in default error range, exports the energy consumption forecast model of the linearly related factor of influence and described non- The energy consumption forecast model of linearly related factor of influence.
Specifically, whether output unit 28 judges the error of prediction result value pre- on the basis of actual building energy consumption value If error range in, if the error of the prediction result value is in default error range or the rate of accuracy reached of prediction result value To at least 90%, that is, fitting result value is exported as energy consumption predicted value.Export the linearly related factor of influence simultaneously The energy consumption forecast model of energy consumption forecast model and the nonlinear correlation factor of influence, the first main affecting factors and second are main Factor of influence.
If the error of prediction result value is not in default error range, on the basis of actual building energy consumption value, again The study precision of the first BP neural network training pattern and the second BP neural network training pattern is set, learns number, is hidden Input weights and hidden layer output weights containing layer, form new the first BP neural network training pattern and the second BP neural network Training pattern, mould is trained to the first BP neural network training pattern and the second BP neural network again according to method same above Type is trained, predicted, draws new prediction result value, until prediction result value error in default error range, The energy consumption forecast model of the final linearly related factor of influence of output and the energy consumption of the nonlinear correlation factor of influence are pre- Model is surveyed, and exports the first main affecting factors and the second main affecting factors.
The beneficial effect that the method and device of structure building energy consumption forecast model provided in an embodiment of the present invention can be brought is extremely It is less:
The embodiments of the invention provide a kind of method and device of building energy consumption forecast model, methods described includes:Obtain Priori data, energy consumption factor set is obtained based on the priori data;The energy consumption factor is classified, by institute State the energy consumption factor and be divided into linearly related factor of influence set and nonlinear correlation factor of influence set;Respectively to described linear The set of the relative influence factor and the nonlinear correlation factor of influence set build corresponding Bayesian network model;Based on corresponding The Bayesian network model determine the first main affecting factors in the linearly related factor of influence set, first non-respectively In main affecting factors, the nonlinear correlation factor of influence set the second main affecting factors and the second non-principal influence because Son;First BP neural network training pattern is built based on first main affecting factors, the first non-principal factor of influence; The second BP neural network is built based on pretreated second main affecting factors and the second non-principal factor of influence Training pattern;Training sample data are obtained, based on the training sample data, mould is trained to first BP neural network respectively Type and the second BP neural network training pattern are trained;Based on default test sample data, respectively to described in after training First BP neural network training pattern and the second BP neural network training pattern are predicted inspection, export prediction result value;Sentence Whether the error for the prediction result value of breaking is in default error range, if the error of the prediction result value is in default mistake Poor scope, the energy consumption forecast model of the output linearly related factor of influence and the energy consumption of the nonlinear correlation factor of influence are pre- Survey model;In this way, using Bayesian network model the chief factor can be obtained from more building Effects of Factors events, that is, build Build the main affecting factors of energy consumption;Recycle BP neural network training pattern constantly to train, predict, obtain approaching to reality data and intend The energy consumption forecast model of conjunction degree, and then the precision of prediction of building energy consumption forecast model can be improved, Accurate Prediction building energy consumption Trend.
The foregoing is only a preferred embodiment of the present invention, is not intended to limit the scope of the present invention, it is all All any modification, equivalent and improvement made within the spirit and principles in the present invention etc., it should be included in the protection of the present invention Within the scope of.

Claims (10)

  1. A kind of 1. method for building building energy consumption forecast model, it is characterised in that methods described includes:
    Building priori data is obtained, energy consumption factor set is obtained based on the priori data;
    The energy consumption factor is classified, the energy consumption factor is divided into linearly related factor of influence set and non- Linearly related factor of influence set;
    Corresponding pattra leaves is built to the linearly related factor of influence set and the nonlinear correlation factor of influence set respectively This network model;
    First main shadow in the linearly related factor of influence set is determined based on the corresponding Bayesian network model respectively Ring the second main affecting factors and second in the factor, the first non-principal factor of influence, the nonlinear correlation factor of influence set Non-principal factor of influence;
    First BP neural network training mould is built based on first main affecting factors, the first non-principal factor of influence Type;The 2nd BP nerve nets are built based on pretreated second main affecting factors and the second non-principal factor of influence Network training pattern;
    Obtain training sample data, based on the training sample data, respectively to the first BP neural network training pattern and Second BP neural network training pattern is trained;
    Based on default test sample data, respectively to the first BP neural network training pattern after training and the 2nd BP god Inspection is predicted through network training model, exports prediction result value;
    The error of the prediction result value is judged whether in default error range, if the error of the prediction result value is pre- If error range, export the energy consumption forecast model of the linearly related factor of influence and the nonlinear correlation factor of influence Energy consumption forecast model.
  2. 2. the method as described in claim 1, it is characterised in that it is described based on pretreated described first it is main influence because Sub, described first non-principal factor of influence builds the first BP neural network training pattern;Based on pretreated second master Before wanting the second BP neural network training pattern of factor of influence and the second non-principal factor of influence structure, including:
    To first main affecting factors, the first non-principal factor of influence, second main affecting factors and described Second non-principal factor of influence carries out being ashed pretreatment and normalization pretreatment.
  3. 3. the method as described in claim 1, it is characterised in that described true based on the corresponding Bayesian network model difference First main affecting factors, the first non-principal factor of influence, the non-linear phase in the fixed linearly related factor of influence set The second main affecting factors and the second non-principal factor of influence in factor of influence set are closed, including:
    The probability distribution of each node in directed acyclic graph is calculated respectively in corresponding Bayesian network model, based on the probability Distribution obtains the relative weight of each node respectively;Each node is corresponding with each factor of influence;
    First main affecting factors, the first non-principal shadow are determined according to the relative weight of each factor of influence respectively Ring the factor, second main affecting factors and the second non-principal factor of influence.
  4. 4. the method as described in claim 1, it is characterised in that the first BP neural network training pattern and the 2nd BP nerves The variable of network training model includes:
    Input layer number is n, node in hidden layer p, output layer nodes q;
    Study precision is ε, and maximum study number is M;
    The hidden layer input weights are wih, the hidden layer output weights are who;Each Node B threshold of hidden layer is bh, institute It is b to state each Node B threshold of output layero
    Activation primitive is
    The error function isThe yooFor any one in the output vector of output layer to Amount, the doFor any one vector in anticipated output vector;
    Input vector is x=(x1, x2..., xn);
    The anticipated output vector is d=(d1, d2..., dq);
    The input vector of the hidden layer is hi=(hi1, hi2..., hip);
    The output vector of the hidden layer is ho=(ho1, ho2..., hop);
    The input vector of the output layer is yi=(yi1, yi2..., yiq);
    The output vector of the output layer is yo=(yo1, yo2..., yoq)。
  5. 5. method as claimed in claim 2, it is characterised in that the acquisition training sample data, including:
    Respectively by pretreated first main affecting factors of the normalization, the first non-principal factor of influence, institute The data sequence for stating the second main affecting factors and the second non-principal factor of influence is segmented, and forms n m+1 length , the data segment for having coincidence;Each data segment is a training sample data.
  6. 6. method as claimed in claim 4, it is characterised in that it is described to be based on the training sample data, respectively to described One BP neural network training pattern and the second BP neural network training pattern are trained, including:
    Determine the error function to the first BP neural network training pattern and the second BP neural network training pattern respectively Each node of output layer partial derivative δo
    Determine the error function to the first BP neural network training pattern and the second BP neural network training pattern respectively Partial derivative-the δ of each node of hidden layerh
    It is utilized respectively the partial derivative δ of each node of the output layeroAnd hohIt is w to correct the hidden layer output weightsho
    It is utilized respectively the partial derivative-δ of each node of the hidden layerhAnd xiCorrect the input weight w of the hidden layerih;The xiFor Any one node in corresponding BP neural network model in input layer.
  7. 7. the method as described in claim 1, it is characterised in that it is described to be based on default test sample data, respectively to training The first BP neural network training pattern and the second BP neural network training pattern afterwards is predicted, and exports prediction result Value, including:
    Based on the test sample data, normalized function is utilizedResolving inversely, output once reduce The test sample data y afterwards;The xmaxFor the maximum in test sample data sequence, the xminFor test sample number According to the minimum value in sequence;
    Original function is gone back using ashingTo the test sample data two after once reducing Secondary reduction, export the test sample data after secondary reduction;
    Based on the test sample data after the secondary reduction, respectively to the first BP neural network training pattern and Two BP neural network training patterns are predicted, and export prediction result value.
  8. 8. method as claimed in claim 7, it is characterised in that the test sample number based on after the secondary reduction According to, the first BP neural network training pattern and the second BP neural network training pattern are predicted respectively, output prediction End value, including:
    Based on the test sample data after the secondary reduction, the first BP neural network training pattern is carried out pre- Survey, export the first prediction result value;
    Based on the test sample data after the secondary reduction, the second BP neural network training pattern is carried out pre- Survey, export the second prediction result value;
    Function and whitening processing function are handled respectively to the first prediction result value and second prediction using renormalization End value is handled, and obtains the first predicted value and the second predicted value;
    It is fitted using the first predicted value described in linear regression function pair and second predicted value, obtains the prediction result Value.
  9. 9. a kind of device for building building energy consumption forecast model, it is characterised in that described device includes:
    Acquiring unit, for obtaining priori building data, energy consumption factor set is obtained based on the priori data;
    Taxon, for classifying to the energy consumption factor, the energy consumption factor is divided into linearly related shadow Ring factor set and nonlinear correlation factor of influence set;
    First construction unit, for respectively to the linearly related factor of influence set and the nonlinear correlation factor of influence collection Close and build corresponding Bayesian network model;
    Determining unit, for determining the linearly related factor of influence set respectively based on the corresponding Bayesian network model In the second main shadow in the first main affecting factors, the first non-principal factor of influence, the nonlinear correlation factor of influence set Ring the factor and the second non-principal factor of influence;
    Second construction unit, for based on first main affecting factors, the first non-principal factor of influence structure first BP neural network training pattern;Based on pretreated second main affecting factors and the second non-principal factor of influence Build the second BP neural network training pattern;
    Training unit, for obtaining training sample data, based on the training sample data, respectively to the first BP nerve nets Network training pattern and the second BP neural network training pattern are trained;
    Predicting unit, for based on default test sample data, being trained respectively to first BP neural network after training Model and the second BP neural network training pattern are predicted inspection, export prediction result value;
    Output unit, for judging the error of the prediction result value whether in default error range, if the prediction knot The error of fruit value exports the energy consumption forecast model of the linearly related factor of influence and described non-linear in default error range The energy consumption forecast model of the relative influence factor.
  10. 10. device as claimed in claim 9, it is characterised in that described device also includes:Pretreatment unit, for described Second construction unit is based on pretreated first main affecting factors, the first non-principal factor of influence structure first BP neural network training pattern;Based on pretreated second main affecting factors and the second non-principal factor of influence Before building the second BP neural network training pattern, to first main affecting factors, the first non-principal factor of influence, Second main affecting factors and the second non-principal factor of influence carry out being ashed pretreatment and normalization pretreatment.
CN201710806517.3A 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model Active CN107590565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710806517.3A CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710806517.3A CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Publications (2)

Publication Number Publication Date
CN107590565A true CN107590565A (en) 2018-01-16
CN107590565B CN107590565B (en) 2021-01-05

Family

ID=61051121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710806517.3A Active CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Country Status (1)

Country Link
CN (1) CN107590565B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615071A (en) * 2018-05-10 2018-10-02 阿里巴巴集团控股有限公司 The method and device of model measurement
CN108764568A (en) * 2018-05-28 2018-11-06 哈尔滨工业大学 A kind of data prediction model tuning method and device based on LSTM networks
CN109063903A (en) * 2018-07-19 2018-12-21 山东建筑大学 A kind of building energy consumption prediction technique and system based on deeply study
CN109325631A (en) * 2018-10-15 2019-02-12 华中科技大学 Electric car charging load forecasting method and system based on data mining
CN109685252A (en) * 2018-11-30 2019-04-26 西安工程大学 Building energy consumption prediction technique based on Recognition with Recurrent Neural Network and multi-task learning model
CN109726936A (en) * 2019-01-24 2019-05-07 辽宁工业大学 A kind of monitoring method rectified a deviation for tilting ancient masonry pagoda
CN110032780A (en) * 2019-02-01 2019-07-19 浙江中控软件技术有限公司 Commercial plant energy consumption benchmark value calculating method and system based on machine learning
CN111062876A (en) * 2018-10-17 2020-04-24 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN111160598A (en) * 2019-11-13 2020-05-15 浙江中控技术股份有限公司 Energy prediction and energy consumption control method and system based on dynamic energy consumption benchmark
CN111179108A (en) * 2018-11-12 2020-05-19 珠海格力电器股份有限公司 Method and device for predicting power consumption
CN111221880A (en) * 2020-04-23 2020-06-02 北京瑞莱智慧科技有限公司 Feature combination method, device, medium, and electronic apparatus
CN111859500A (en) * 2020-06-24 2020-10-30 广州大学 Method and device for predicting bridge deck elevation of rigid frame bridge
CN112183166A (en) * 2019-07-04 2021-01-05 北京地平线机器人技术研发有限公司 Method and device for determining training sample and electronic equipment
CN112462708A (en) * 2020-11-19 2021-03-09 南京河海南自水电自动化有限公司 Remote diagnosis and optimized scheduling method and system for pump station
CN113552855A (en) * 2021-07-23 2021-10-26 重庆英科铸数网络科技有限公司 Industrial equipment dynamic threshold setting method and device, electronic equipment and storage medium
CN116204566A (en) * 2023-04-28 2023-06-02 深圳市欣冠精密技术有限公司 Digital factory monitoring big data processing system
CN117077854A (en) * 2023-08-15 2023-11-17 广州视声智能科技有限公司 Building energy consumption monitoring method and system based on sensor network

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104269849A (en) * 2014-10-17 2015-01-07 国家电网公司 Energy managing method and system based on building photovoltaic micro-grid
CN104331737A (en) * 2014-11-21 2015-02-04 国家电网公司 Office building load prediction method based on particle swarm neural network
CN104597842A (en) * 2015-02-02 2015-05-06 武汉理工大学 BP neutral network heavy machine tool thermal error modeling method optimized through genetic algorithm
CN104765916A (en) * 2015-03-31 2015-07-08 西南交通大学 Dynamics performance parameter optimizing method of high-speed train
CN104834808A (en) * 2015-04-07 2015-08-12 青岛科技大学 Back propagation (BP) neural network based method for predicting service life of rubber absorber
CN105373830A (en) * 2015-12-11 2016-03-02 中国科学院上海高等研究院 Prediction method and system for error back propagation neural network and server
CN105631539A (en) * 2015-12-25 2016-06-01 上海建坤信息技术有限责任公司 Intelligent building energy consumption prediction method based on support vector machine
CN106161138A (en) * 2016-06-17 2016-11-23 贵州电网有限责任公司贵阳供电局 A kind of intelligence automatic gauge method and device
CN106874581A (en) * 2016-12-30 2017-06-20 浙江大学 A kind of energy consumption of air conditioning system in buildings Forecasting Methodology based on BP neural network model
CN106951611A (en) * 2017-03-07 2017-07-14 哈尔滨工业大学 A kind of severe cold area energy-saving design in construction optimization method based on user's behavior
CN106991504A (en) * 2017-05-09 2017-07-28 南京工业大学 Building energy consumption Forecasting Methodology, system and building based on metering separate time series

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104269849A (en) * 2014-10-17 2015-01-07 国家电网公司 Energy managing method and system based on building photovoltaic micro-grid
CN104331737A (en) * 2014-11-21 2015-02-04 国家电网公司 Office building load prediction method based on particle swarm neural network
CN104597842A (en) * 2015-02-02 2015-05-06 武汉理工大学 BP neutral network heavy machine tool thermal error modeling method optimized through genetic algorithm
CN104765916A (en) * 2015-03-31 2015-07-08 西南交通大学 Dynamics performance parameter optimizing method of high-speed train
CN104834808A (en) * 2015-04-07 2015-08-12 青岛科技大学 Back propagation (BP) neural network based method for predicting service life of rubber absorber
CN105373830A (en) * 2015-12-11 2016-03-02 中国科学院上海高等研究院 Prediction method and system for error back propagation neural network and server
CN105631539A (en) * 2015-12-25 2016-06-01 上海建坤信息技术有限责任公司 Intelligent building energy consumption prediction method based on support vector machine
CN106161138A (en) * 2016-06-17 2016-11-23 贵州电网有限责任公司贵阳供电局 A kind of intelligence automatic gauge method and device
CN106874581A (en) * 2016-12-30 2017-06-20 浙江大学 A kind of energy consumption of air conditioning system in buildings Forecasting Methodology based on BP neural network model
CN106951611A (en) * 2017-03-07 2017-07-14 哈尔滨工业大学 A kind of severe cold area energy-saving design in construction optimization method based on user's behavior
CN106991504A (en) * 2017-05-09 2017-07-28 南京工业大学 Building energy consumption Forecasting Methodology, system and building based on metering separate time series

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
史丽荣: "日光温室环境建模及控制策略的研究", 《中国优秀硕士学位论文全文数据库 农业科技辑》 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019214309A1 (en) * 2018-05-10 2019-11-14 阿里巴巴集团控股有限公司 Model test method and device
CN112232476B (en) * 2018-05-10 2024-04-16 创新先进技术有限公司 Method and device for updating test sample set
US11176418B2 (en) 2018-05-10 2021-11-16 Advanced New Technologies Co., Ltd. Model test methods and apparatuses
CN112232476A (en) * 2018-05-10 2021-01-15 创新先进技术有限公司 Method and device for updating test sample set
CN108615071B (en) * 2018-05-10 2020-11-24 创新先进技术有限公司 Model testing method and device
CN108615071A (en) * 2018-05-10 2018-10-02 阿里巴巴集团控股有限公司 The method and device of model measurement
TWI698808B (en) * 2018-05-10 2020-07-11 香港商阿里巴巴集團服務有限公司 Model testing method and device
CN108764568B (en) * 2018-05-28 2020-10-23 哈尔滨工业大学 Data prediction model tuning method and device based on LSTM network
CN108764568A (en) * 2018-05-28 2018-11-06 哈尔滨工业大学 A kind of data prediction model tuning method and device based on LSTM networks
CN109063903A (en) * 2018-07-19 2018-12-21 山东建筑大学 A kind of building energy consumption prediction technique and system based on deeply study
CN109063903B (en) * 2018-07-19 2021-04-09 山东建筑大学 Building energy consumption prediction method and system based on deep reinforcement learning
CN109325631A (en) * 2018-10-15 2019-02-12 华中科技大学 Electric car charging load forecasting method and system based on data mining
CN111062876B (en) * 2018-10-17 2023-08-08 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN111062876A (en) * 2018-10-17 2020-04-24 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN111179108A (en) * 2018-11-12 2020-05-19 珠海格力电器股份有限公司 Method and device for predicting power consumption
CN109685252B (en) * 2018-11-30 2023-04-07 西安工程大学 Building energy consumption prediction method based on cyclic neural network and multi-task learning model
CN109685252A (en) * 2018-11-30 2019-04-26 西安工程大学 Building energy consumption prediction technique based on Recognition with Recurrent Neural Network and multi-task learning model
CN109726936A (en) * 2019-01-24 2019-05-07 辽宁工业大学 A kind of monitoring method rectified a deviation for tilting ancient masonry pagoda
CN109726936B (en) * 2019-01-24 2020-06-30 辽宁工业大学 Monitoring method for deviation correction of inclined masonry ancient tower
CN110032780A (en) * 2019-02-01 2019-07-19 浙江中控软件技术有限公司 Commercial plant energy consumption benchmark value calculating method and system based on machine learning
CN112183166A (en) * 2019-07-04 2021-01-05 北京地平线机器人技术研发有限公司 Method and device for determining training sample and electronic equipment
CN111160598A (en) * 2019-11-13 2020-05-15 浙江中控技术股份有限公司 Energy prediction and energy consumption control method and system based on dynamic energy consumption benchmark
CN111221880A (en) * 2020-04-23 2020-06-02 北京瑞莱智慧科技有限公司 Feature combination method, device, medium, and electronic apparatus
CN111859500B (en) * 2020-06-24 2023-10-10 广州大学 Prediction method and device for bridge deck elevation of rigid frame bridge
CN111859500A (en) * 2020-06-24 2020-10-30 广州大学 Method and device for predicting bridge deck elevation of rigid frame bridge
CN112462708A (en) * 2020-11-19 2021-03-09 南京河海南自水电自动化有限公司 Remote diagnosis and optimized scheduling method and system for pump station
CN113552855A (en) * 2021-07-23 2021-10-26 重庆英科铸数网络科技有限公司 Industrial equipment dynamic threshold setting method and device, electronic equipment and storage medium
CN116204566A (en) * 2023-04-28 2023-06-02 深圳市欣冠精密技术有限公司 Digital factory monitoring big data processing system
CN117077854A (en) * 2023-08-15 2023-11-17 广州视声智能科技有限公司 Building energy consumption monitoring method and system based on sensor network
CN117077854B (en) * 2023-08-15 2024-04-16 广州视声智能科技有限公司 Building energy consumption monitoring method and system based on sensor network

Also Published As

Publication number Publication date
CN107590565B (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN107590565A (en) A kind of method and device for building building energy consumption forecast model
CN107169628B (en) Power distribution network reliability assessment method based on big data mutual information attribute reduction
CN103226741B (en) Public supply mains tube explosion prediction method
CN109523021B (en) Dynamic network structure prediction method based on long-time and short-time memory network
CN101480143B (en) Method for predicating single yield of crops in irrigated area
CN106453293A (en) Network security situation prediction method based on improved BPNN (back propagation neural network)
CN106874688A (en) Intelligent lead compound based on convolutional neural networks finds method
CN104572449A (en) Automatic test method based on case library
CN108596274A (en) Image classification method based on convolutional neural networks
CN104536881A (en) Public testing error report priority sorting method based on natural language analysis
CN107886160B (en) BP neural network interval water demand prediction method
CN104539601B (en) Dynamic network attack process analysis method for reliability and system
CN106202380A (en) The construction method of a kind of corpus of classifying, system and there is the server of this system
CN112330050A (en) Power system load prediction method considering multiple features based on double-layer XGboost
CN111178585A (en) Fault reporting amount prediction method based on multi-algorithm model fusion
CN103226728B (en) High density polyethylene polymerization cascade course of reaction Intelligent Measurement and yield optimization method
CN109754122A (en) A kind of Numerical Predicting Method of the BP neural network based on random forest feature extraction
CN103995873A (en) Data mining method and data mining system
CN103544539A (en) Method for predicting variables of users on basis of artificial neural networks and D-S (Dempster-Shafer) evidence theory
CN106096723A (en) A kind of based on hybrid neural networks algorithm for complex industrial properties of product appraisal procedure
CN109787821B (en) Intelligent prediction method for large-scale mobile client traffic consumption
CN106355273A (en) Predication system and predication method for after-stretching performance of nuclear material radiation based on extreme learning machine
CN101206727A (en) Data processing apparatus, data processing method data processing program and computer readable medium
CN112232570A (en) Forward active total electric quantity prediction method and device and readable storage medium
CN109146335B (en) Method for judging consistency of system transformation ratio and actual transformation ratio of 10kV line electric energy meter

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant