CN114742278A - Building energy consumption prediction method and system based on improved LSTM - Google Patents
- Publication number: CN114742278A (application CN202210265853.2A)
- Authority: CN (China)
- Prior art keywords: lstm, neural network, energy consumption, optimal, parameters
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/08 — Learning methods
- G06Q50/08 — Construction
- G06Q50/26 — Government or public services
- Y04S10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a building energy consumption prediction method and system based on an improved LSTM. The method comprises the following steps: obtaining the optimal parameters of the LSTM neural network; introducing these optimal parameters into an LSTM variant neural network and optimizing its hyper-parameters with a weight-decay-based stochastic gradient optimization algorithm to obtain the optimal hyper-parameters, the LSTM variant neural network with the optimal hyper-parameters serving as the optimal LSTM prediction model; and processing collected data that influence the building load with the optimal LSTM prediction model to predict the building's load data at a specified time, thereby predicting building energy consumption. The method achieves higher prediction accuracy and better stability, and is well suited to short-term energy consumption prediction for commercial buildings.
Description
Technical Field
The invention belongs to the technical field of energy consumption prediction, and particularly relates to a building energy consumption prediction method and system based on improved LSTM.
Background
With the accelerating pace of urbanization, the number of urban buildings keeps growing, and building energy consumption accounts for an ever larger share of total energy use. Globally, buildings now consume more energy than industry or transportation, accounting for 46% of total energy consumption, while buildings contribute as much as 36% of carbon emissions. People spend roughly 90% of their time indoors, and the continuous pursuit of thermal comfort further increases building energy consumption and greenhouse gas emissions. Energy demand management for the energy-intensive building sector has therefore become an important research field.
Among all building types, commercial buildings consume 30% more energy than residential buildings, mainly because of their large floor area, heavy traffic, long operating hours, and high lighting and air-conditioning demands. Surveys show that shopping malls have the highest energy consumption among commercial public buildings, with an average of 3.521 GJ/(m²·a) per unit floor area, about 3 times that of office buildings and 2 times that of hotel buildings. Given that energy consumption prediction is the key to improving energy utilization efficiency and reducing peak power demand, commercial building energy consumption prediction has become a problem of global concern. However, outdoor temperature and humidity, solar radiation, occupant movement, and similar factors change the energy consumption inside a building, and the nonlinear and fluctuating operation of most building equipment makes accurate prediction of energy consumption data a serious challenge. In recent years, the wide deployment of high-precision sensors has provided important support for predicting building energy consumption.
Energy consumption prediction methods fall mainly into two classes: (1) physical models (including EnergyPlus, eQuest, Ecotect, etc.); and (2) data-driven models (including artificial neural networks (ANNs), convolutional neural networks (CNNs), support vector machines (SVMs), decision trees (DTs), regression models (RMs), etc.). Data-driven methods are general, flexible, and accurate, and have therefore been widely adopted for building energy consumption prediction in recent years.
In short-term load prediction, Kim Y showed, by predicting the 1-hour peak load of a government office building in Seoul, Korea, that the ANN model achieved the best prediction accuracy but was poorly interpretable. ANNs have also proved to be a robust method for efficiently predicting the power consumption of households. CNNs have been used to extract nonlinear load features and nonlinear load-temperature features from a constructed hourly load cube, with the extracted features fed into support vector regression (SVR) for short-term load prediction. SVMs find their most active application in short-term power load prediction: Mohandes M first applied an SVM to power load prediction using the same data and weights as an autoregressive model, and the results showed better performance, although the SVM was found to be more suitable for long-term load prediction.
Regression prediction targets the dependencies inside a data set, whereas time-series prediction targets the dependence of the data on time. In recent years, many scholars studying nonlinear time-series problems have combined traditional time-series prediction models with recurrent neural networks (RNNs) and achieved good performance. At the neuron level, the nodes of an RNN hidden layer are connected to one another, so earlier information can be memorized and influences the output of later nodes. Although this in theory handles the training of sequence data well, RNNs often suffer from exploding and vanishing gradients, which become especially severe for long sequences. To remedy these deficiencies, Hochreiter & Schmidhuber proposed the long short-term memory (LSTM) neural network, a variant of the RNN, in 1997. The innovation of the LSTM is the introduction of three control gates (an input gate, an output gate, and a forget gate): by adjusting the opening and closing of these valves, the model memorizes, selectively retains, and forgets previous data, so that early parts of the sequence can influence the final result. Thanks to its excellent ability to memorize time series, the LSTM is widely used for text generation, machine translation, speech recognition, and gesture prediction. However, Klaus Greff et al. found that the setting of the LSTM hyper-parameters has a significant influence on prediction accuracy, so selecting a suitable algorithm to improve LSTM prediction accuracy for a given application scenario is a challenging task. Gradient descent and back-propagation are commonly used to update the LSTM hyper-parameters, but the performance of gradient descent itself depends on hyper-parameters such as the learning rate, weight decay, and momentum.
Adaptive moment estimation (Adam) is an effective stochastic gradient optimization algorithm. It combines first-moment and second-moment estimates of the gradient, updates parameters according to the oscillation of the historical gradients and the filtered historical gradients, and bounds the update step by the initial learning rate, so it is unaffected by gradient rescaling and its hyper-parameters are well interpretable.
However, the performance of a single model is limited, and many scholars now use hybrid models to improve energy consumption prediction accuracy. Current hybrid models, though, are built to accomplish specific tasks rather than adjusting the structure of the LSTM itself to improve its accuracy, so research that solves practical engineering problems by enhancing the performance of the LSTM algorithm itself remains largely unexplored. In addition, Adam's stability is affected by weight decay.
Disclosure of Invention
To better realize energy consumption prediction, the invention addresses these problems with a building energy consumption prediction method (DwdAdam-LSTM) and system based on an improved LSTM.
The technical scheme adopted by the invention is as follows:
a building energy consumption prediction method based on improved LSTM comprises the following processes:
obtaining optimal parameters corresponding to the LSTM neural network;
introducing the optimal parameters into an LSTM variant neural network, optimizing the hyperparameters in the LSTM variant neural network by using a weight attenuation-based random gradient optimization algorithm to obtain the optimal hyperparameters of the LSTM variant neural network, and taking the LSTM variant neural network corresponding to the optimal hyperparameters as an optimal LSTM prediction model;
and processing the collected data influencing the building load by using the optimal LSTM prediction model, predicting the load data of the building at the specified time, and realizing the prediction of the building energy consumption.
Preferably, the optimal parameters of the LSTM neural network are obtained as follows:
Data influencing the building load over a preset period before the prediction time are converted into a three-dimensional array and used as the raw data for predicting the next time step. A grid search determines the LSTM batch size b, number of hidden layers d, and number of hidden units u, which together form a three-dimensional search space; stepb, stepd, and stepu are the grid step sizes for searching b, d, and u, respectively. Training and testing use the training data set in a pre-established historical database, training is repeated a preset number of times over the parameter value ranges, and the mean absolute error (MAE) serves as the objective function of the grid search, yielding the optimal parameters of the LSTM neural network.
Preferably, the optimal parameters of the LSTM neural network comprise the batch size, the number of hidden layers, and the number of hidden units, where the batch size ranges over 13-18, the number of hidden layers over 1-3, and the number of hidden units over 20-80.
Preferably, the LSTM variant neural network model is obtained by improving the gate structure of the LSTM neural network.
The improvement of the gate structure comprises: introducing the cell state of the previous time step into the computation of the forgotten data, adding a peephole connection to the forget gate; and connecting the forget gate to the input gate, so that new information is introduced as old information is forgotten, with the retained old information and the introduced new information set to be complementary, without changing the input activation function.
Preferably:
the forgetting gate calculation formula is as follows:
ft=sigmoid(Wfxt,ht-1,Ct-1]+bf)
the input gate calculation formula is as follows:
it=sigmoid(Wi|xt,ht-1,(1-ft)]+bi)
in the formula, Ct-1Is t-1Step-by-step cell State, 1-ftIs the old information retained, WfWeight matrix for forgetting gate, WiAnd WCAs a weight matrix of input gates, vector bfBias vector for forgetting gate, biAnd bcAs an offset vector of the input gate, ht-1Is the output value, x, of the t-1 time step hidden statetIs t time step input information, ftIs left behind gate output, itIs the output of the input gate or gates,is a candidate value for time t, sigmoid () and tanh () are activation functions, respectively.
Preferably, the weight-decay-based stochastic gradient optimization algorithm is obtained by introducing a weight decay term into the parameter update of the Adam optimization algorithm, after the gradient moment estimation and bias correction.
Preferably, the parameter update formula of the weight-decay-based stochastic gradient optimization algorithm is:
θ_t = θ_{t-1} - η(m̂_t / (√v̂_t + ε) + ω_{t-1}·θ_{t-1}), with θ = [W, b]
where W comprises the weight matrix W_f of the forget gate, the weight matrices W_i and W_C of the input gate, and the weight matrix W_o of the output gate; b comprises the bias vector b_f of the forget gate, the bias vectors b_i and b_C of the input gate, and the bias vector b_o of the output gate; θ_t is the parameter vector after the update at time step t and θ_{t-1} after the update at time step t-1; m̂_t is the bias-corrected first moment vector and v̂_t the bias-corrected second moment vector; ε is a small constant, η is the learning rate, and ω_{t-1} is the weight decay rate.
Preferably, the hyper-parameters of the LSTM variant neural network optimized by the weight-decay-based stochastic gradient optimization algorithm comprise the weight matrices and bias matrices of the forget gate, input gate, and output gate (specifically W_f, W_i, W_C, W_o, b_f, b_i, b_C, and b_o). During optimization, the first-moment exponential decay rate β_1, the second-moment exponential decay rate β_2, and the learning rate η of the weight-decay-based stochastic gradient optimization algorithm are first set, and the gradient g_t of the parameter vector at time step t, the first moment vector m_t, the second moment vector v_t, the bias-corrected first moment vector m̂_t, the bias-corrected second moment vector v̂_t, and the updated parameter θ_t are initialized. Training is then carried out on the training data set in the pre-established database, and the set of hyper-parameters that minimizes the loss function f(θ) is found; these are the optimal hyper-parameters.
Preferably, the data influencing the building load comprise temperature, humidity, solar radiation, wind speed, and measured air-conditioning load data.
The invention also provides a building energy consumption prediction system based on the improved LSTM, comprising:
an optimal parameter acquisition unit, for obtaining the optimal parameters of the LSTM neural network;
a model optimization unit, for introducing the optimal parameters into an LSTM variant neural network, optimizing the hyper-parameters of the LSTM variant neural network with a weight-decay-based stochastic gradient optimization algorithm to obtain the optimal hyper-parameters, and taking the LSTM variant neural network with the optimal hyper-parameters as the optimal LSTM prediction model; and
a data application unit, for processing collected data that influence the building load with the optimal LSTM prediction model, predicting the building's load data at a specified time, and thereby predicting building energy consumption.
The invention has the following beneficial effects:
Most studies use SVR and Adam-LSTM neural network models to predict the hourly energy consumption of air-conditioning systems, but owing to the limitations of those prediction models the results are unsatisfactory. The present method not only provides an adaptive learning rate for the hyper-parameters but also adds a weight decay term to the parameter update, which improves the convergence rate. Experimental results show that, compared with the SVR model, the method memorizes historical data more fully and effectively, and that it is more stable and more accurate than Adam-LSTM. The MSE of the hourly energy consumption predictions is reduced by 83% and 78% relative to SVR and LSTM, and by 66%, 71%, and 30% relative to SCA-LSTM, RMSprop-LSTM, and Adam-LSTM, respectively. The method therefore offers higher prediction accuracy and better stability and is well suited to short-term energy consumption prediction for commercial buildings.
Drawings
FIG. 1 is a schematic diagram of an improved LSTM neural network according to the present invention;
FIG. 2 is a flow chart of the DwdAdam-LSTM network model of the present invention;
FIG. 3(a) is a graph showing the correlation between the load and the temperature of the present invention at a hysteresis cycle of 6 hours;
FIG. 3(b) is a graph showing the correlation between load and humidity in the case of the present invention at a hysteresis cycle of 6 hours;
FIG. 3(c) is a graph of the correlation of the present invention load with solar radiation at a lag period of 6 hours;
FIG. 3(d) is a graph showing the correlation between the load and the wind speed in the hysteresis cycle of 6 hours according to the present invention;
FIG. 4 is a diagram illustrating loss values of different numbers of concealment layers during iteration according to the present invention;
FIG. 5 is a schematic diagram of the variation of loss values of 1 hidden layer after 100 iterations according to the present invention;
fig. 6 is a schematic diagram of the change of loss values of 3 hidden layers after 100 iterations according to the present invention.
Detailed Description
The invention is further described below with reference to the figures and examples.
The invention relates to a building energy consumption prediction method based on improved LSTM, which comprises the following steps:
step 1: construction of energy consumption prediction model (DwdAdam-LSTM)
The energy consumption prediction model comprises four layers of structures which are respectively as follows: the system comprises a data acquisition layer, a data preprocessing layer, a data analysis layer and a data application layer.
The data acquisition layer uses automated equipment such as high-precision sensors and controllers to collect, aggregate, and store the temperature, humidity, solar radiation, and wind speed data that affect building loads.
The data preprocessing layer processes the collected raw data. First, raw data may be corrupted or inaccurate when acquisition equipment is exposed to extreme weather, and packet loss during transmission also produces bad values or missing data; bad values are removed, and bad or missing data are filled by interpolation, moving-average filtering, and similar means. Second, statistical plots are drawn to observe the internal regularities of each data group, and correlation analysis is used to study the correlation between energy consumption and each influencing factor and to screen the factors appropriately. In addition, the data undergo min-max normalization, mapping them into [0, 1]; this avoids problems caused by differing data scales and speeds up model convergence. Finally, a cross-validation idea is introduced in the data processing, dividing the energy consumption data into a training set and a test set.
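As a concrete illustration of the preprocessing above, the following minimal sketch (hypothetical helper names, not code from the patent) applies min-max normalization and a chronological train/test split to a toy feature matrix:

```python
import numpy as np

def min_max_normalize(x):
    """Map each feature column into [0, 1] (min-max normalization)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

def chronological_split(data, train_ratio=0.75):
    """Earlier samples become the training set, later ones the test set."""
    n_train = int(len(data) * train_ratio)
    return data[:n_train], data[n_train:]

# Four hourly samples of (temperature, humidity) -- toy values.
raw = np.array([[10.0, 50.0], [20.0, 60.0], [30.0, 70.0], [40.0, 80.0]])
scaled = min_max_normalize(raw)
train, test = chronological_split(scaled)
```

A chronological (rather than shuffled) split is used here because the later prediction step is a time-series task.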
The data analysis layer optimizes the LSTM variant neural network with the improved DwdAdam optimization algorithm to obtain the optimal hyper-parameter values and thereby improve prediction accuracy. The existing energy consumption data are used for training and testing, the test-set loss (MAE) is computed, and iteration ends when the loss reaches its minimum.
The data application layer uses the trained model to predict the load data of a large commercial building at a specified time.
Step 2: constructing LSTM variant neural networks
The invention improves the gate structure of the LSTM: the cell state of the previous time step is introduced into the computation of the forgotten data, and a peephole connection is added to the forget gate, so the information to be retained can be learned more accurately. The forget gate is connected to the input gate, so new information is introduced as old information is forgotten, with the retained old information and the introduced new information set to be complementary, without changing the input activation function. The improved LSTM variant neural network structure is shown in FIG. 1.
The forget gate and input gate of the LSTM variant network are computed as in formulas (1)-(3):
A) Forget gate:
f_t = sigmoid(W_f[x_t, h_{t-1}, C_{t-1}] + b_f) (1)
B) Input gate:
i_t = sigmoid(W_i[x_t, h_{t-1}, (1 - f_t)] + b_i) (2)
C̃_t = tanh(W_C[x_t, h_{t-1}] + b_C) (3)
where C_{t-1} is the cell state at time step t-1; 1 - f_t is the retained old information; W_f is the weight matrix of the forget gate; W_i and W_C are the weight matrices of the input gate; b_f is the bias vector of the forget gate; b_i and b_C are the bias vectors of the input gate; h_{t-1} is the hidden-state output at time step t-1; x_t is the input at time step t; f_t is the forget gate output; i_t is the input gate output; C̃_t is the candidate value at time t; and sigmoid() and tanh() are activation functions.
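The variant gates can be sketched as a single NumPy forward step. This is a minimal illustration, not the patent's implementation: the parameter dictionary, matrix shapes, and the output-gate form (taken as standard LSTM, which the patent leaves unchanged) are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def variant_lstm_step(x_t, h_prev, C_prev, p):
    """One forward step of the variant cell: the forget gate sees C_{t-1}
    (peephole connection), the input gate is coupled to the forget gate
    through (1 - f_t), and the candidate activation stays tanh."""
    f_t = sigmoid(p["Wf"] @ np.concatenate([x_t, h_prev, C_prev]) + p["bf"])     # eq. (1)
    i_t = sigmoid(p["Wi"] @ np.concatenate([x_t, h_prev, 1.0 - f_t]) + p["bi"])  # eq. (2)
    C_tilde = np.tanh(p["WC"] @ np.concatenate([x_t, h_prev]) + p["bC"])         # eq. (3)
    C_t = f_t * C_prev + i_t * C_tilde                                 # new cell state
    o_t = sigmoid(p["Wo"] @ np.concatenate([x_t, h_prev]) + p["bo"])   # output gate (standard)
    h_t = o_t * np.tanh(C_t)                                           # hidden state
    return h_t, C_t

# Tiny random instance: input size D = 2, hidden size H = 3 (illustrative).
rng = np.random.default_rng(0)
D, H = 2, 3
p = {"Wf": 0.1 * rng.normal(size=(H, D + 2 * H)), "bf": np.zeros(H),
     "Wi": 0.1 * rng.normal(size=(H, D + 2 * H)), "bi": np.zeros(H),
     "WC": 0.1 * rng.normal(size=(H, D + H)),     "bC": np.zeros(H),
     "Wo": 0.1 * rng.normal(size=(H, D + H)),     "bo": np.zeros(H)}
h_t, C_t = variant_lstm_step(np.ones(D), np.zeros(H), np.zeros(H), p)
```

Note that W_f and W_i act on a widened input (x_t, h_{t-1}, plus the peephole or coupling term), so their second dimension is D + 2H rather than the usual D + H.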
Step 3: DwdAdam optimization algorithm
The traditional Adam optimizer estimates the first and second moments of the gradient and, via bias correction, computes an individual adaptive learning rate for each parameter to find the optimal hyper-parameter values. However, Adam has been found less stable than SGD on some tasks, mainly because of weight decay.
In this method, after the gradient moment estimation and bias correction, a weight decay term is introduced at the parameter update, yielding the weight-decay-based stochastic gradient optimizer DwdAdam. The adaptive learning-rate update is thus decoupled from the weight decay, the hyper-parameters no longer depend on one another, and each can be optimized independently. The DwdAdam update is computed as in formulas (4)-(5):
θ_t = θ_{t-1} - η(m̂_t / (√v̂_t + ε) + ω_{t-1}·θ_{t-1}) (4)
θ = [W, b] (5)
where W comprises the weight matrix W_f of the forget gate, the weight matrices W_i and W_C of the input gate, and the weight matrix W_o of the output gate; b comprises the bias vector b_f of the forget gate, the bias vectors b_i and b_C of the input gate, and the bias vector b_o of the output gate; θ_t is the parameter vector after the update at time step t and θ_{t-1} after the update at time step t-1; m̂_t is the bias-corrected first moment vector and v̂_t the bias-corrected second moment vector; ε is a small constant, η is the learning rate, and ω_{t-1} is the weight decay rate.
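A decoupled-weight-decay update of this kind (the same scheme as AdamW) can be sketched on a scalar toy problem. The function name, default coefficient values, and the quadratic test objective are illustrative assumptions, not from the patent:

```python
import numpy as np

def dwdadam_step(theta, grad, m, v, t, eta=0.01, beta1=0.9, beta2=0.999,
                 eps=1e-8, omega=0.01):
    """One parameter update: Adam moment estimates with bias correction,
    plus a weight decay term decoupled from the adaptive learning rate,
    i.e. theta -= eta * (m_hat / (sqrt(v_hat) + eps) + omega * theta)."""
    m = beta1 * m + (1.0 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1.0 - beta1 ** t)              # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - eta * (m_hat / (np.sqrt(v_hat) + eps) + omega * theta)
    return theta, m, v

# Minimize f(theta) = theta**2 from theta = 1.0; gradient is 2 * theta.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = dwdadam_step(theta, 2.0 * theta, m, v, t)
```

The decay term `omega * theta` is applied directly to the parameter rather than being folded into the gradient, which is what decouples it from the adaptive learning rate.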
Step 4: Building energy consumption prediction
Data analysis shows that short-term load prediction depends to some extent on the loads and weather factors of the preceding days, so the DwdAdam optimization algorithm is used to obtain the optimal hyper-parameters of the LSTM network and reach the best result in the shortest time.
The whole work of building energy consumption prediction can be divided into 4 parts: (1) data cleaning; (2) LSTM structure optimization; (3) short-term energy consumption prediction; (4) evaluation of the DwdAdam-LSTM model. The algorithm flow is shown in fig. 2.
Step 4-1: data cleansing
The proposed model is used to predict the energy consumption demand of a building, and real data collected within the building are used to demonstrate the superiority of the established model. To further improve the efficiency of parameter updating and the accuracy of energy consumption prediction, abnormal and missing values in the original data set are interpolated, the main factors influencing energy consumption are screened out by correlation analysis, the data are normalized to the range [0, 1], and the sample set is divided into a training set and a test set, as shown in figs. 3(a)-3(d).
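The cleaning steps above can be sketched as follows. This is a minimal sketch on a single series; the interpolation method, the [0, 1] min-max scaling and the chronological 90/10 split are assumptions based on the description.

```python
import numpy as np

def preprocess(series, train_frac=0.9):
    """Data-cleaning sketch for step 4-1.

    Linearly interpolates missing values (NaN), scales to [0, 1]
    with min-max normalization, and splits chronologically into
    training and test sets.
    """
    series = np.asarray(series, dtype=float)
    # Interpolate NaNs (abnormal readings should be set to NaN upstream)
    idx = np.arange(len(series))
    mask = np.isnan(series)
    series[mask] = np.interp(idx[mask], idx[~mask], series[~mask])
    # Min-max normalization to [0, 1]
    lo, hi = series.min(), series.max()
    scaled = (series - lo) / (hi - lo)
    # Chronological split: the test set is the most recent portion
    split = int(len(scaled) * train_frac)
    return scaled[:split], scaled[split:]
```

A chronological (rather than random) split keeps the test set in the future relative to training, which matches how the model would be used for forecasting.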
Step 4-2: optimizing the structure of an LSTM neural network
Before load prediction with the LSTM, the load, temperature, humidity and solar radiation data of the preceding 6 hours are converted into a three-dimensional array and used as raw data for predicting the next time step. The LSTM batch b, number of hidden layers d and number of hidden units u are determined by a grid search method. b, d and u form a three-dimensional search space, with stepb, stepd and stepu the grid step lengths searched for each parameter. Training and testing use the training data set in the historical database; training is run 5 times within each parameter's value range, and the mean absolute error (MAE) is taken as the objective function of the grid search algorithm to obtain the optimal parameters of the LSTM neural network. The value ranges of the LSTM network parameter variables are shown in table 1.
TABLE 1
Parameter | Value range |
---|---|
Number of batches | [13, 18] |
Number of hidden layers | [1, 3] |
Number of hidden layer units | [20, 80] |
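The grid search over (b, d, u) described above can be sketched as follows. `evaluate_mae` is a hypothetical placeholder for training the LSTM with a given setting and returning its mean absolute error; the real objective is the trained network's MAE on held-out data.

```python
import itertools

def grid_search(evaluate_mae, batches, depths, units):
    """Grid search over (batch, hidden layers, hidden units).

    `evaluate_mae(b, d, u)` stands in for training the LSTM with
    those structural parameters and returning its MAE; the setting
    with the lowest MAE wins.
    """
    best, best_mae = None, float("inf")
    for b, d, u in itertools.product(batches, depths, units):
        mae = evaluate_mae(b, d, u)
        if mae < best_mae:
            best, best_mae = (b, d, u), mae
    return best, best_mae
```

With the ranges from table 1, the search space has 6 x 3 x 61 combinations when stepped by 1, so coarser step lengths (stepb, stepd, stepu) trade resolution for training time.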
The optimal parameters are introduced into the LSTM variant neural network. The hyper-parameters W_f, W_i, W_C, W_o, b_f, b_i, b_c and b_o in the LSTM are then optimized with the DwdAdam optimization algorithm: first the parameters β_1, β_2 and η in DwdAdam are set and the vectors g_t, m_t, v_t and θ_t are initialized; then training is performed on the training data set in the database to find the group of hyper-parameters that minimizes the loss function f(θ), giving the optimal LSTM prediction model.
Step 4-3: short term energy consumption prediction
The optimal LSTM prediction model obtained in step 4-2 is tested with the test set of the database to obtain the short-term energy consumption prediction values.
Step 4-4: evaluation model
The accuracy of the energy consumption prediction model is evaluated on the predicted energy consumption values using CV-RMSE, MSE, MAE and MAPE.
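The four evaluation indices can be computed as follows. Expressing MAPE and CV-RMSE as percentages is an assumption (the patent reports percentage reductions for these indices); MAPE also assumes no zero actual values.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """The four accuracy indices used in step 4-4 (lower is better)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                        # mean absolute error
    mse = np.mean(err ** 2)                           # mean square error
    mape = np.mean(np.abs(err / y_true)) * 100.0      # assumes y_true != 0
    cv_rmse = np.sqrt(mse) / np.mean(y_true) * 100.0  # RMSE / mean, in %
    return {"MAE": mae, "MSE": mse, "MAPE": mape, "CV-RMSE": cv_rmse}
```

CV-RMSE normalizes RMSE by the mean load, so models evaluated on buildings with different load magnitudes remain comparable.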
Examples
This embodiment demonstrates the excellent prediction performance of the model by studying the energy consumption data of a large commercial building.
Description of the experiments
Hardware used in the experiment includes temperature and humidity sensors, a solar radiation sensor, a micro wind speed sensor, smart electricity meters, a smart gateway, a DDC controller, a data concentrator, air switches and a 24V switching power supply. The DwdAdam-LSTM energy consumption prediction model was implemented in Python 3.8 under the Windows 10 operating system on an AMD R7 processor. The experiment comprises four stages: (1) data acquisition; (2) data preprocessing; (3) DwdAdam-LSTM model setup; (4) performance evaluation.
1.1 data set description
Data were collected from a large commercial building (a mall) 40.6 meters high with a building area of about 25 thousand square meters, of which the air-conditioned area is about 18.76 thousand square meters. The data set comprises actual temperature, humidity, solar radiation, wind speed and air-conditioning load data recorded hourly from 8:00 to 22:00 each day from 2 June 2021 to 12 August 2021, for a total of 1080 groups of data.
1.2 data preprocessing
Data preprocessing is an indispensable step before analysis: it prevents the model from being distorted by missing values, abnormal values and the like, and it also addresses the dimensional differences between variables that would otherwise be incompatible with the learning model. Min-max normalization is adopted to avoid loss of prediction accuracy.
The internal regularity of each group of data is examined, and correlation analysis is used to study the correlation between energy consumption and each influencing factor and to screen out the factors with high correlation. The processed data set is divided into training and test samples, reducing model complexity and improving prediction accuracy. The experimental data set totals 1080 groups, with the test samples accounting for 10% of the whole. Figures 3(a)-3(d) show the correlation between the load and the other influencing factors over a lag period of 6 hours: the correlations of the load with temperature, humidity and solar radiation clearly exceed the upper and lower confidence limits, with solar radiation showing the highest correlation coefficient, while the correlation with wind speed is the lowest, so the wind speed variable is eliminated.
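The lag-correlation screening described above can be sketched as follows. The ±1.96/√n confidence band is the standard white-noise approximation assumed here; the patent's exact significance test is not specified.

```python
import numpy as np

def lagged_correlation(load, feature, max_lag=6):
    """Pearson correlation between load and a lagged feature.

    Mirrors the screening in section 1.2: a feature whose correlation
    stays inside the confidence band over the whole 6-hour lag window
    is dropped, as the wind speed variable was.
    """
    load = np.asarray(load, dtype=float)
    feature = np.asarray(feature, dtype=float)
    corrs = []
    for lag in range(max_lag + 1):
        a = load[lag:]
        b = feature[:len(feature) - lag]
        corrs.append(np.corrcoef(a, b)[0, 1])
    bound = 1.96 / np.sqrt(len(load))  # approximate 95% confidence limit
    significant = any(abs(c) > bound for c in corrs)
    return corrs, significant
```

A feature that never leaves the band contributes mostly noise to the LSTM input, so removing it shrinks the input dimension without losing predictive signal.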
1.3 DwdAdam-LSTM model set-up
The structural parameters of the LSTM are obtained by the grid search algorithm; the specific parameter settings are shown in table 2.
TABLE 2
Simulation experiments were carried out in the TensorFlow-based Keras deep learning library in Python.
1.4 model evaluation index
To evaluate the model effectively, four indices are used as the standard for assessing the performance of the energy consumption prediction model: mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and coefficient of variation of the root mean square error (CV-RMSE). MAE is used as the loss function; the smaller its value, the better the prediction model performs.
2 results and discussion
The experimental models are compared and analyzed, demonstrating the accuracy advantage of DwdAdam-LSTM over the comparison models in energy consumption prediction. To reduce accidental experimental error, each performance evaluation value is the average of 20 runs.
2.1 LSTM optimal parameters
The first step of the experiment is to find the best number of LSTM hidden layers, batch size and number of hidden units using the grid search algorithm. LSTM neural network models with hidden layer number in [1, 3], batch range [5, 18] and hidden unit number in [30, 80] were tested.
The experimental results show that the optimal batch is 15. The main reason is that the mall is open from 8:00 to 22:00 each day, the energy consumption prediction targets the air-conditioning load of the commercial area, and the air-conditioning load varies regularly with weather and time factors. The optimal number of hidden units is 50, and 3 hidden layers initially appeared best. The experiments then verified the loss values of different hidden layer counts under the optimal hidden units and batch; fig. 4 shows the influence of different numbers of hidden layers over 5 iterations in terms of mean absolute error (MAE). Fig. 4 shows that the loss with 2 hidden layers is slightly higher than with the other two settings, and the loss with 3 hidden layers is the smallest of the three. However, when the 3-hidden-layer model is run against the test data set for 100 iterations (figs. 5 and 6), although its training loss is smaller than that of 1 hidden layer, its test loss is worse, indicating overfitting; the optimal number of hidden layers is therefore 1.
2.2 Comparison of the proposed method with mainstream prediction models
Both LSTM and SVR neural networks have energy consumption prediction capability; when multi-source heterogeneous data are processed, combining an optimization algorithm with a data-driven model can improve the accuracy and stability of the energy consumption prediction model.
The predicted values from the LSTM neural network and the SVR network are both broadly consistent with the actual values, but the weekly error of the SVR network is clearly higher than that of the LSTM neural network, with a relatively wide error range. Analysis shows that the air-conditioning system's energy consumption is strongly influenced by external environmental factors and fluctuates strongly; the many input parameters such as temperature and solar radiation also contribute to the SVR's reduced performance. The LSTM, as a recurrent neural network, retains the nonlinear mapping capability of the SVR network while being suited to processing trend data, since the historical record is long and the forget gate retains the useful data.
After the LSTM neural network is optimized with different algorithms, the prediction effect improves markedly. However, the overall absolute error of SCA-LSTM is large, and the error ranges of RMSprop-LSTM and Adam-LSTM are wide, giving poor stability, whereas DwdAdam-LSTM performs stably with an error fluctuation range within 15%.
The quality performance indices of the different prediction models are calculated under the optimal architecture. The experiments show that the single data-driven model LSTM has higher prediction accuracy than SVR, reducing the MSE of hour-by-hour energy consumption prediction by 23%. However, a single data-driven model is far inferior to a hybrid model in convergence and model accuracy, and the hybrid models also differ across indices: for example, SCA-LSTM is inferior to RMSprop-LSTM on MAE and MAPE but better on MSE. SCA-LSTM performs worst among the hybrid models, yet its CV-RMSE, MAE, MAPE and MSE are still 19%, 8.6%, 10.7% and 35.7% lower than LSTM's. Compared with LSTM, the CV-RMSE, MAE, MAPE and MSE of the proposed DwdAdam-LSTM model are reduced by 58%, 81%, 79% and 78% respectively; compared with the second-best Adam-LSTM, they are reduced by 41%, 53%, 52% and 30% respectively. The comparison of quality performance indices for energy consumption prediction across the different models is shown in table 3.
TABLE 3
It can be seen that the energy consumption prediction model based on DwdAdam-LSTM is significantly improved in various performance evaluation indexes compared with the comparative model.
Claims (10)
1. A building energy consumption prediction method based on improved LSTM is characterized by comprising the following processes:
obtaining optimal parameters corresponding to the LSTM neural network;
introducing the optimal parameters into an LSTM variant neural network, optimizing the hyper-parameters in the LSTM variant neural network by using a random gradient optimization algorithm based on weight attenuation to obtain the optimal hyper-parameters of the LSTM variant neural network, and taking the LSTM variant neural network corresponding to the optimal hyper-parameters as an optimal LSTM prediction model;
and processing the collected data influencing the building load by using the optimal LSTM prediction model, predicting the load data of the building at the specified time, and realizing the prediction of the building energy consumption.
2. The improved LSTM-based building energy consumption prediction method of claim 1, wherein the LSTM neural network corresponding to the optimal parameters obtaining process comprises:
converting data influencing building load in a preset time period before building energy consumption prediction into a three-dimensional array, taking the three-dimensional array as original data of a predicted later time step, determining an LSTM neural network batch b, a hidden layer number d and a hidden unit number u by adopting a grid search method, wherein the batch b, the hidden layer number d and the hidden unit number u form a three-dimensional search space, stepb, stepd and stepu respectively correspond to grid step lengths searched by the batch b, the hidden layer number d and the hidden unit number u, training and testing are carried out by using a training data set in a pre-established historical database, training is carried out for preset times in a parameter value range, and an average absolute error MAE value is taken as a target function of the grid search method to obtain an optimal parameter corresponding to the LSTM neural network.
3. The improved LSTM-based building energy consumption prediction method of claim 2, wherein the LSTM neural network corresponding optimal parameters comprise batch number, hidden layer number and hidden unit number, wherein the batch range is 13-18, the hidden layer number is 1-3, and the hidden unit number is 20-80.
4. The improved LSTM-based building energy consumption prediction method of claim 1, wherein the LSTM variant neural network model is obtained by improving the gate structure of the LSTM neural network;
the process of improving the gate structure of the LSTM neural network includes: introducing the cell state of the previous moment when calculating the forgetting data, adding a peephole connection to the forget gate; and connecting the forget gate with the input gate so that new information is introduced as old information is forgotten, setting the retained old information and the introduced new information to be complementary without changing the input activation function.
5. The improved LSTM-based building energy consumption prediction method of claim 4, wherein:
the forgetting gate calculation formula is as follows:
ft=sigmoid(Wf[xt,ht-1,Ct-1]+bf)
the input gate calculation formula is as follows:
it=sigmoid(Wi[xt,ht-1,(1-ft)]+bi)
in the formula, C_{t-1} is the cell state at time step t-1, 1-f_t is the retained old information, W_f is the weight matrix of the forget gate, W_i and W_C are the weight matrices of the input gate, b_f is the bias vector of the forget gate, b_i and b_c are the bias vectors of the input gate, h_{t-1} is the output value of the hidden state at time step t-1, x_t is the input information at time step t, f_t is the forget gate output, i_t is the input gate output, C̃_t is the candidate value at time t, and sigmoid() and tanh() are the activation functions.
6. The improved LSTM-based building energy consumption prediction method according to claim 1, wherein the weight attenuation-based stochastic gradient optimization algorithm is obtained by introducing a weight attenuation term during parameter update after gradient moment estimation in an Adam optimization algorithm through bias correction.
7. The improved LSTM-based building energy consumption prediction method according to claim 6, wherein the weight attenuation-based stochastic gradient optimization algorithm has the following parameter update formula:
θ=[W,b]
wherein W includes the weight matrix of the forget gate, the weight matrices of the input gate and the weight matrix of the output gate; b includes the bias matrix of the forget gate, the bias matrices of the input gate and the bias matrix of the output gate; θ_t is the parameter updated at time step t, θ_{t-1} the parameter updated at time step t-1; m̂_t is the bias-corrected first moment vector and v̂_t the bias-corrected second moment vector; ε is a small constant, η is the learning rate, and ω_{t-1} is the weight decay rate.
8. The improved LSTM-based building energy consumption prediction method of claim 1, wherein the hyper-parameters optimized for the LSTM variant neural network using the weight-decay-based stochastic gradient optimization algorithm comprise the weight matrix of the forget gate, the weight matrices of the input gate, the weight matrix of the output gate, the bias matrix of the forget gate, the bias matrices of the input gate, and the bias matrix of the output gate; during optimization, the exponential decay rates of the first and second moment estimates and the learning rate are first set in the weight-decay-based stochastic gradient optimization algorithm, and the gradient at time step t, the first moment vector, the second moment vector, the bias corrections of the first and second moment vectors and the update parameter are initialized; then training is performed using a training data set in a pre-established database to find the group of hyper-parameters that minimizes the loss function f(θ), these being the optimal hyper-parameters.
9. The improved LSTM based building energy consumption prediction method of claim 1 where the data affecting building load includes temperature, humidity, solar radiation, wind speed and air conditioning load actual data.
10. An improved LSTM based building energy consumption prediction system, comprising:
an optimal parameter acquisition unit: the method comprises the steps of obtaining optimal parameters corresponding to the LSTM neural network;
a model optimization unit: used for introducing the optimal parameters into the LSTM variant neural network, optimizing the hyper-parameters in the LSTM variant neural network using the weight-decay-based stochastic gradient optimization algorithm to obtain the optimal hyper-parameters of the LSTM variant neural network, and taking the LSTM variant neural network corresponding to the optimal hyper-parameters as the optimal LSTM prediction model;
a data application unit: the method is used for processing the collected data influencing the building load by using the optimal LSTM prediction model, predicting the load data of the building at the appointed time and realizing the prediction of the building energy consumption.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210265853.2A CN114742278A (en) | 2022-03-17 | 2022-03-17 | Building energy consumption prediction method and system based on improved LSTM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114742278A true CN114742278A (en) | 2022-07-12 |
Family
ID=82277721
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210265853.2A Pending CN114742278A (en) | 2022-03-17 | 2022-03-17 | Building energy consumption prediction method and system based on improved LSTM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114742278A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107239859A (en) * | 2017-06-05 | 2017-10-10 | 国网山东省电力公司电力科学研究院 | The heating load forecasting method of Recognition with Recurrent Neural Network is remembered based on series connection shot and long term |
CN109685252A (en) * | 2018-11-30 | 2019-04-26 | 西安工程大学 | Building energy consumption prediction technique based on Recognition with Recurrent Neural Network and multi-task learning model |
KR20210050892A (en) * | 2019-10-29 | 2021-05-10 | 중앙대학교 산학협력단 | Deep learning method using adaptive weight-decay |
CN112101521A (en) * | 2020-08-13 | 2020-12-18 | 国网辽宁省电力有限公司电力科学研究院 | Building energy consumption prediction method based on long-term and short-term memory network hybrid model |
CN113191529A (en) * | 2021-04-07 | 2021-07-30 | 武汉科技大学 | New building energy consumption prediction method based on transfer learning deep confrontation neural network |
CN113850438A (en) * | 2021-09-29 | 2021-12-28 | 西安建筑科技大学 | Public building energy consumption prediction method, system, equipment and medium |
Non-Patent Citations (1)
Title |
---|
YU JUNQI: "Research on energy consumption prediction of office buildings base on comprehensive similar day and ensemble learning", JOURNAL OF INTELLIGENT&FUZZY SYSTEMS, vol. 40, no. 6, 16 July 2021 (2021-07-16), pages 11951 - 11965 * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115115284A (en) * | 2022-08-29 | 2022-09-27 | 同方德诚(山东)科技股份公司 | Energy consumption analysis method based on neural network |
CN115115284B (en) * | 2022-08-29 | 2022-11-15 | 同方德诚(山东)科技股份公司 | Energy consumption analysis method based on neural network |
CN115271256A (en) * | 2022-09-20 | 2022-11-01 | 华东交通大学 | Intelligent ordering method under multi-dimensional classification |
CN115271256B (en) * | 2022-09-20 | 2022-12-16 | 华东交通大学 | Intelligent ordering method under multi-dimensional classification |
CN115511197B (en) * | 2022-10-11 | 2023-09-08 | 呼伦贝尔安泰热电有限责任公司海拉尔热电厂 | Heat supply load prediction method for heat exchange station in alpine region |
CN115511197A (en) * | 2022-10-11 | 2022-12-23 | 呼伦贝尔安泰热电有限责任公司海拉尔热电厂 | Heat supply load prediction method for heat exchange station in alpine region |
CN115841186A (en) * | 2022-12-23 | 2023-03-24 | 国网山东省电力公司东营供电公司 | Industrial park load short-term prediction method based on regression model |
CN116070881A (en) * | 2023-03-13 | 2023-05-05 | 淮阴工学院 | Intelligent energy consumption scheduling method and device for modern industrial production area |
CN116070881B (en) * | 2023-03-13 | 2023-09-29 | 淮阴工学院 | Intelligent energy consumption scheduling method and device for modern industrial production area |
CN116187584A (en) * | 2023-04-19 | 2023-05-30 | 深圳大学 | Building carbon footprint prediction method and system based on gradient descent algorithm |
CN116187584B (en) * | 2023-04-19 | 2023-09-05 | 深圳大学 | Building carbon footprint prediction method and system based on gradient descent algorithm |
CN116861248A (en) * | 2023-07-21 | 2023-10-10 | 浙江大学 | Building energy consumption prediction method and system combining multi-window fusion method and focusing framework model |
CN116861248B (en) * | 2023-07-21 | 2024-02-27 | 浙江大学 | Building energy consumption prediction method and system combining multi-window fusion method and focusing framework model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114742278A (en) | Building energy consumption prediction method and system based on improved LSTM | |
Somu et al. | A hybrid model for building energy consumption forecasting using long short term memory networks | |
CN109685252B (en) | Building energy consumption prediction method based on cyclic neural network and multi-task learning model | |
CN108280551B (en) | Photovoltaic power generation power prediction method utilizing long-term and short-term memory network | |
CN111260136A (en) | Building short-term load prediction method based on ARIMA-LSTM combined model | |
CN111563610A (en) | LSTM neural network-based building electrical load comprehensive prediction method and system | |
CN114119273B (en) | Non-invasive load decomposition method and system for park comprehensive energy system | |
CN109242265B (en) | Urban water demand combined prediction method based on least square sum of errors | |
CN112101521A (en) | Building energy consumption prediction method based on long-term and short-term memory network hybrid model | |
CN113554466A (en) | Short-term power consumption prediction model construction method, prediction method and device | |
CN105160441B (en) | It is transfinited the real-time electric power load forecasting method of vector regression integrated network based on increment type | |
Dong et al. | Short-term building cooling load prediction model based on DwdAdam-ILSTM algorithm: A case study of a commercial building | |
CN112949894B (en) | Output water BOD prediction method based on simplified long-short-term memory neural network | |
CN113325721A (en) | Model-free adaptive control method and system for industrial system | |
CN114648147A (en) | IPSO-LSTM-based wind power prediction method | |
CN113591957B (en) | Wind power output short-term rolling prediction and correction method based on LSTM and Markov chain | |
Wu et al. | Short-term electric load forecasting model based on PSO-BP | |
Kumar et al. | Forecasting indoor temperature for smart buildings with ARIMA, SARIMAX, and LSTM: A fusion approach | |
Nghiem et al. | Applying Bayesian inference in a hybrid CNN-LSTM model for time-series prediction | |
Ibrahim et al. | LSTM neural network model for ultra-short-term distribution zone substation peak demand prediction | |
CN117767262A (en) | Photovoltaic power generation capacity prediction method for optimizing CNN-LSTM (carbon nano-tube-laser-based) based on improved firefly algorithm | |
CN115545503B (en) | Power load medium-short term prediction method and system based on parallel time sequence convolutional neural network | |
CN116565850A (en) | Wind power ultra-short-term prediction method based on QR-BLSTM | |
CN110659775A (en) | LSTM-based improved electric power short-time load prediction algorithm | |
CN114234392B (en) | Air conditioner load fine prediction method based on improved PSO-LSTM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||