CN111275169A - Method for predicting building thermal load in short time - Google Patents
- Publication number
- CN111275169A (application CN202010055455.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- output
- input
- gate
- load
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Economics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Strategic Management (AREA)
- Artificial Intelligence (AREA)
- Human Resources & Organizations (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Tourism & Hospitality (AREA)
- Game Theory and Decision Science (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Entrepreneurship & Innovation (AREA)
- Development Economics (AREA)
- Public Health (AREA)
- Water Supply & Treatment (AREA)
- Primary Health Care (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a method for short-term prediction of building heat load. The method first selects meteorological data and heating load data over a given time period and constructs a data set as the input variables; the input variable data are then preprocessed; a long short-term memory (LSTM) network performs feature extraction on the preprocessed input variables to realize high-level feature learning; an Attention mechanism is added to the LSTM network and, by distributing attention weights, gives greater weight to the key features among the LSTM output features that influence the output variable; finally, an LSTM network is run to realize the short-term prediction of the building heat load. The method overcomes the inability of traditional heat load prediction methods to mine data features deeply, as well as the information loss that occurs when an LSTM network processes a long sequence, thereby improving the heat load prediction accuracy.
Description
Technical Field
The invention relates to the technical field of heating system thermal operation regulation and control, in particular to a method for predicting a building thermal load in a short time.
Background
The heat load prediction is a process of calculating and predicting future heat loads by analyzing and mining historical data.
In the prior art, the traditional heat load prediction methods mainly include time series, regression, grey prediction, BP neural networks, support vector machines and the like. Although these models can predict the heat load, they are shallow mining methods that cannot deeply mine the random and nonlinear characteristics of heat load data, so their prediction accuracy is low. The long short-term memory network (LSTM) is an improved recurrent neural network and an important research result in the field of deep learning; using an LSTM network to predict the heat load allows the heat load data to be mined deeply.
Disclosure of Invention
The invention aims to provide a method for short-term prediction of building heat load that overcomes the inability of traditional heat load prediction methods to mine data features deeply and the information loss that occurs when an LSTM network processes a long sequence, thereby improving the accuracy of heat load prediction.
The purpose of the invention is realized by the following technical scheme:
A method of short-term building thermal load prediction, the method comprising:
step 1, selecting meteorological data and heating load data in a given time period and constructing a data set as the input variables;
step 2, preprocessing the input variable data, including the identification and repair of bad data and data standardization;
step 3, performing feature extraction on the preprocessed input variables with a long short-term memory (LSTM) network to realize high-level feature learning;
step 4, adding an Attention mechanism to the LSTM network and, by distributing attention weights, giving greater weight to the key features among the LSTM output features that influence the output variable;
and step 5, inputting the output features weighted by the Attention mechanism in step 4 into an LSTM network, repeating the operation of step 3, and outputting the prediction result to realize the short-term prediction of the building heat load.
It can be seen from the technical scheme provided by the invention that the method overcomes the inability of traditional heat load prediction methods to mine data features deeply and the information loss that occurs when an LSTM network processes a long sequence, thereby improving the accuracy of heat load prediction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic flow chart of a method for predicting the thermal load of a building for a short period of time according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a standard LSTM network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of data preprocessing performed on a case data set according to the present invention;
FIG. 4 is a schematic diagram of a predicted value of the thermal load obtained through an LSTM network based on an attention mechanism according to an example of the present invention;
FIG. 5 is a diagram illustrating the relative error between the predicted value and the actual value of the thermal load according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the present invention is described in further detail below with reference to the accompanying drawings. Fig. 1 is a schematic flow chart of the method for short-term building thermal load prediction provided by the embodiment of the present invention; the method includes the following steps.
Step 1, meteorological data and heating load data in a given time period are selected, and a data set is constructed as the input variables.
Step 2, the input variable data are preprocessed. Because the data set used as the input variables may have missing values for various reasons, or contain data with large deviations or null values, the data are preprocessed to reduce the influence of individual bad data on the prediction accuracy, and the repaired data are finally standardized to facilitate subsequent processing. The specific process is as follows:
1) Bad data in the input variable data are identified. The mean and variance of the data are first calculated with equation (1):

μ_i = (1/M)·Σ_{m=1}^{M} Q_{m,i},  ν_i = (1/M)·Σ_{m=1}^{M} (Q_{m,i} − μ_i)²  (1)

In the above formula, Q_{m,i} is the load value at the i-th moment of the m-th day; μ_i and ν_i are its mean and variance, respectively; M is the total number of days.
Bad data identification is then realized on the basis of the 3σ principle using equation (2), in which σ_i is the standard deviation corresponding to the variance ν_i, and ε is a set threshold, usually taken as 1-1.5. If the load data do not satisfy equation (2), they are normal data and are retained; if the load data satisfy equation (2), they are judged to be bad data and the subsequent correction processing is performed.
2) The identified bad data are repaired with the repair formula given as equation (3). In equation (3), the left-hand side is the corrected load value at the i-th moment of the m-th day; Q_{m±1,i} are the load values at the i-th moment of the two similar days immediately before and after the m-th day; α and γ are weight coefficients.
3) Data standardization is then carried out. Specifically, min-max standardization is applied to the input variable data so that the standardized results all lie between 0 and 1, which unifies the dimensions of the data and eliminates the influence of dimensional differences between the data on the prediction result. The min-max standardization is calculated as:

y = (x − x_min) / (x_max − x_min)  (4)

In the above formula, x represents the input variable data; x_min and x_max represent the minimum and maximum values of the input variable data; y represents the standardized data.
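The preprocessing described above can be summarised in a short script. The following is a minimal sketch rather than the patent's implementation: the threshold eps = 1.2 and the weights alpha = gamma = 0.5 are assumed values (the patent only states that the threshold is usually 1-1.5 and that α and γ are weight coefficients), and the exact forms assumed for equations (2) and (3) follow the surrounding description.

```python
import numpy as np

def preprocess_load(load, eps=1.2, alpha=0.5, gamma=0.5):
    """load: (M, T) array of M days x T hourly heat-load values."""
    mean = load.mean(axis=0)                        # per-moment mean over the M days (eq. 1)
    std = load.std(axis=0)                          # per-moment standard deviation sigma_i
    bad = np.abs(load - mean) > eps * std           # assumed form of the 3-sigma criterion (eq. 2)
    repaired = load.copy()
    for m, i in zip(*np.where(bad)):                # repair each bad point from adjacent days (assumed form of eq. 3)
        prev_day = load[max(m - 1, 0), i]
        next_day = load[min(m + 1, load.shape[0] - 1), i]
        repaired[m, i] = alpha * prev_day + gamma * next_day
    # min-max standardization to [0, 1] (eq. 4)
    return (repaired - repaired.min()) / (repaired.max() - repaired.min())
```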
Step 3, feature extraction is performed on the preprocessed input variables with the LSTM network to realize high-level feature learning. Fig. 2 is a schematic structural diagram of a standard LSTM network according to the embodiment of the present invention. With reference to Fig. 2, the long short-term memory (LSTM) network introduces three gating units to the neuron, namely a forget gate, an input gate and an output gate; the input gate and the output gate are used for reading, outputting and correcting the parameters, while the forget gate controls whether the hidden cell state of the previous layer is forgotten with a certain probability. The forget gate is calculated as:
f_t = σ(W_f·[h_{t−1}, x_t] + b_f)  (5)

In the above formula, W_f is the weight matrix of the forget gate; σ is the sigmoid function; b_f is the bias term of the forget gate; h_{t−1} and x_t represent the output of the previous cell and the input of the current cell, respectively; [h_{t−1}, x_t] means concatenating the two vectors into one longer vector; f_t is the output of the forget gate.

The input gate consists of two parts: the first part uses the σ activation function and its output is i_t; the second part uses the tanh activation function and its output is a_t. The two outputs are expressed as:

i_t = σ(W_i·[h_{t−1}, x_t] + b_i)  (6)

a_t = tanh(W_a·[h_{t−1}, x_t] + b_a)  (7)

In the above formulas, W_i is the weight matrix of the input gate; b_i is the bias term of the input gate; i_t is the output of the input gate; W_a and b_a are the weight matrix and bias term of the candidate cell state a_t used for the update.

Before studying the LSTM output gate, the LSTM cell state is examined. The outputs of the forget gate and the input gate both act on the new cell state C_t, which consists of two parts: the first part is the product of the old cell state C_{t−1} and the forget-gate output f_t, and the second part is the product of the input-gate outputs i_t and a_t. This is expressed as:

C_t = f_t·C_{t−1} + i_t·a_t  (8)

With the new cell state C_t, the output gate can be studied. The update of the hidden state h_t consists of two parts: the first part is the output o_t of the output gate, and the second part is formed from the new cell state C_t and the tanh activation function. The two parts are calculated as:

o_t = σ(W_o·[h_{t−1}, x_t] + b_o)  (9)

h_t = o_t·tanh(C_t)  (10)

In the above formulas, W_o is the weight matrix of the output gate; b_o is the bias term of the output gate; o_t is the output of the output gate.
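As an illustration of equations (5)-(10), the following NumPy sketch performs one LSTM cell update. The function name, the layer sizes and the random initial weights are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, b_f, W_i, b_i, W_a, b_a, W_o, b_o):
    z = np.concatenate([h_prev, x_t])     # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)          # forget gate, eq. (5)
    i_t = sigmoid(W_i @ z + b_i)          # input gate, eq. (6)
    a_t = np.tanh(W_a @ z + b_a)          # candidate cell state, eq. (7)
    c_t = f_t * c_prev + i_t * a_t        # new cell state, eq. (8)
    o_t = sigmoid(W_o @ z + b_o)          # output gate, eq. (9)
    h_t = o_t * np.tanh(c_t)              # hidden state, eq. (10)
    return h_t, c_t

# example with hidden size 4 and input size 9 (the input dimensionality used later in the embodiment)
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((4, 13)) for k in "fiao"}
b = {k: np.zeros(4) for k in "fiao"}
h_t, c_t = lstm_step(rng.standard_normal(9), np.zeros(4), np.zeros(4),
                     W["f"], b["f"], W["i"], b["i"], W["a"], b["a"], W["o"], b["o"])
```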
Step 4, the Attention mechanism is added to the LSTM network. The mechanism retains the intermediate output results of the LSTM encoder on the input sequence, then trains a model to learn these intermediate outputs selectively, and associates the output sequence with them when the model produces its output. The Attention mechanism is essentially a similarity measure: the more similar the current input is to the target state, the greater the weight given to the current input. It imitates the attention model of the human brain.
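A minimal sketch of such an attention step over the retained LSTM outputs is given below. Dot-product similarity followed by a softmax is an assumed concrete choice, since the patent only specifies that the weight grows with the similarity between the current input and the target state.

```python
import numpy as np

def attention(lstm_outputs, query):
    """lstm_outputs: (T, d) intermediate outputs kept for every time step; query: (d,) target state."""
    scores = lstm_outputs @ query                  # similarity of each step to the target state
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax -> attention weights
    context = weights @ lstm_outputs               # weighted sum of the step outputs
    return context, weights
```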
Step 5, the output features weighted by the Attention mechanism in Step 4 are input into an LSTM network, the operation of Step 3 is repeated, and the prediction result is output to realize the short-term prediction of the building heat load.
The implementation of the above method is described in detail below with a specific example. In this example, the data set used consists of the hourly meteorological data and heat load data of a heat exchange station in Qingdao from 30 November 2018 to 5 April 2019, and the specific process is as follows:
First, data preprocessing is carried out on the data set: the discrete weather descriptions in the meteorological data are one-hot encoded with get_dummies, and a correlation analysis of the data is then performed. Fig. 3 is a schematic diagram of the data preprocessing carried out on the case data set provided by the invention.
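A minimal sketch of this one-hot encoding step with pandas is shown below; the column names and sample values are illustrative assumptions, not the case data.

```python
import pandas as pd

df = pd.DataFrame({
    "weather": ["sunny", "cloudy", "snow", "sunny"],   # discrete weather description (assumed column name)
    "outdoor_temp": [-3.0, -1.5, -6.2, -2.1],
    "heat_load": [812.0, 790.5, 901.3, 805.7],
})
weather_onehot = pd.get_dummies(df["weather"], prefix="weather", dtype=float)   # one-hot encoding
df = pd.concat([df.drop(columns=["weather"]), weather_onehot], axis=1)
print(df.corr()["heat_load"].sort_values(ascending=False))                      # simple correlation check
```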
data set partitioning: data from 12/15/00/2018 to 12/31/23/00/2018 are divided into training sets, and data from 1/00/2019 to 31/2018 are divided into testing sets. Taking an attention mechanism as an interface of 2 LSTM networks, firstly processing an input sequence through one LSTM network to realize high-level feature learning, wherein input and output activation functions all adopt 'relu'; secondly, an attention mechanism reasonably distributes attention weight to the output characteristics of the LSTM network, and finally, an LSTM network is operated to realize short-term building heat load prediction. The model setting of the LSTM network is specifically as follows:
defining a model: using the input _ dim parameter, set it to 9, which is consistent with the dimensionality of the data; the sense class is used to define the fully connected layer.
Compiling the model: compiling the model makes it possible to use the numerical operations encapsulated by Keras efficiently; the loss function (loss) used to evaluate a set of weights is specified as mean_squared_error, and the optimizer (optimizer) used to search the network weights is specified as Adam.
Training the model: the model needs to be trained before it can be used to predict new data. The model is trained by calling the model.fit() function; during training the epochs parameter is set to 3000 and the batch_size is set to the size of the training set data_train.
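The model definition, compilation and training described above could look roughly as follows in Keras. This is a hedged sketch: the layer sizes (64 units), the sequence length, the use of tf.keras.layers.Attention as the interface between the two LSTMs, and the dummy training arrays are assumptions; only the input dimension of 9, the 'relu' activations, the mean_squared_error loss, the Adam optimizer, epochs = 3000 and a batch size equal to the training-set length come from the embodiment.

```python
import numpy as np
from tensorflow.keras import layers, models

timesteps, input_dim = 24, 9                       # input_dim = 9 as in the embodiment; window length assumed
inputs = layers.Input(shape=(timesteps, input_dim))
x = layers.LSTM(64, activation="relu", return_sequences=True)(inputs)  # first LSTM: high-level feature learning
x = layers.Attention()([x, x])                                         # attention weights over the LSTM outputs
x = layers.LSTM(64, activation="relu")(x)                              # second LSTM
outputs = layers.Dense(1)(x)                                           # fully connected output layer (Dense class)
model = models.Model(inputs, outputs)

model.compile(loss="mean_squared_error", optimizer="adam")

# dummy arrays standing in for the preprocessed training set
x_train = np.random.rand(256, timesteps, input_dim).astype("float32")
y_train = np.random.rand(256, 1).astype("float32")
model.fit(x_train, y_train, epochs=3000, batch_size=len(x_train), verbose=0)
```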
Finally, the trained model is used to realize the short-term building heat load prediction. Fig. 4 is a schematic diagram of the heat load prediction values obtained with the attention-based LSTM network in this example, and Fig. 5 is a schematic diagram of the relative error between the predicted and true heat load values. As can be seen from Figs. 4 and 5, the LSTM network based on the attention mechanism performs well in short-term building heat load prediction.
It is noted that content not described in detail in the embodiments of the present invention belongs to knowledge well known to those skilled in the art.
In summary, the method according to the embodiment of the present invention deeply mines the data features of the data set through the LSTM network, breaks the limitation that the conventional encoder-decoder structure depends on an internal fixed length vector during encoding and decoding by using the introduced Attention mechanism, and reasonably allocates the Attention weight to the output features of the LSTM network, thereby improving the accuracy of the thermal load prediction.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (4)
1. A method of short-term building thermal load prediction, the method comprising:
step 1, selecting meteorological data and heating load data in a given time period, and constructing a data set as an input variable;
step 2, preprocessing the data of the input variable, specifically including identification and repair of bad data and standardization processing of data;
step 3, performing feature extraction on the preprocessed input variables by using a long-short term memory (LSTM) network to realize high-level feature learning;
step 4, adding an Attention mechanism in the LSTM network, and giving greater weight to key features influencing output variables in the output features of the LSTM network by distributing Attention weights; the output variable refers to heat load prediction data lagging behind the input variable by m periods;
and 5, inputting the output characteristics subjected to the assignment of the weights by the Attention mechanism in the step 4 into an LSTM network, repeating the operation in the step 3, and outputting a prediction result to realize the prediction of the heat load of the building in a short period of time.
2. The method for predicting the thermal load of the building for the short time as claimed in claim 1, wherein the process of the step 2 is specifically as follows:
1) identifying bad data in the input variable data, first calculating the mean and variance of the data using equation (1):

μ_i = (1/M)·Σ_{m=1}^{M} Q_{m,i},  ν_i = (1/M)·Σ_{m=1}^{M} (Q_{m,i} − μ_i)²  (1)

in the above formula, Q_{m,i} is the load value at the i-th moment of the m-th day; μ_i and ν_i are its mean and variance, respectively; M is the total number of days;
and then, based on the 3σ principle, realizing the bad data identification using equation (2): if the load data do not satisfy equation (2), they are normal data and are retained; if the load data satisfy equation (2), they are judged to be bad data and the subsequent correction processing is performed;
2) and repairing the identified bad data with the repair formula given as equation (3), in which the left-hand side is the corrected load value at the i-th moment of the m-th day; Q_{m±1,i} are the load values at the i-th moment of the two similar days immediately before and after the m-th day; and α and γ are weight coefficients;
3) and then carrying out data standardization; specifically, min-max standardization is applied to the input variable data so that the standardized results all lie between 0 and 1, which unifies the dimensions of the data and eliminates the influence of dimensional differences between the data on the prediction result, the min-max standardization being calculated as:

y = (x − x_min) / (x_max − x_min)  (4)

in the above formula, x represents the input variable data; x_min and x_max represent the minimum and maximum values of the input variable data; y represents the standardized data.
3. The method for predicting the thermal load of the building for the short time as claimed in claim 1, wherein the process of the step 3 is specifically as follows:
the long short-term memory (LSTM) network introduces three gating units to the neuron, namely a forget gate, an input gate and an output gate; the input gate and the output gate are used for reading, outputting and correcting the parameters; the forget gate is used for controlling whether the hidden cell state of the previous layer is forgotten with a certain probability, the forget gate being calculated as:
f_t = σ(W_f·[h_{t−1}, x_t] + b_f)  (5)

in the above formula, W_f is the weight matrix of the forget gate; σ is the sigmoid function; b_f is the bias term of the forget gate; h_{t−1} and x_t represent the output of the previous cell and the input of the current cell, respectively; [h_{t−1}, x_t] means concatenating the two vectors into one longer vector; f_t is the output of the forget gate;

the input gate consists of two parts: the first part uses the σ activation function and its output is i_t; the second part uses the tanh activation function and its output is a_t; the two outputs are expressed as:

i_t = σ(W_i·[h_{t−1}, x_t] + b_i)  (6)

a_t = tanh(W_a·[h_{t−1}, x_t] + b_a)  (7)

in the above formulas, W_i is the weight matrix of the input gate; b_i is the bias term of the input gate; i_t is the output of the input gate; W_a and b_a are the weight matrix and bias term of the candidate cell state a_t used for the update;

the outputs of the forget gate and the input gate both act on the new cell state C_t; the new cell state C_t consists of two parts, the first part being the product of the old cell state C_{t−1} and the forget-gate output f_t, and the second part being the product of the input-gate outputs i_t and a_t, expressed as:

C_t = f_t·C_{t−1} + i_t·a_t  (8)

the update of the hidden state h_t consists of two parts, the first part being the output o_t of the output gate and the second part being formed from the new cell state C_t and the tanh activation function; the two parts are calculated as:

o_t = σ(W_o·[h_{t−1}, x_t] + b_o)  (9)

h_t = o_t·tanh(C_t)  (10)

in the above formulas, W_o is the weight matrix of the output gate; b_o is the bias term of the output gate; o_t is the output of the output gate.
4. The method for short-term building thermal load prediction according to claim 1, characterized in that, in step 4,
the added Attention mechanism retains the intermediate output results of the LSTM encoder on the input sequence, then trains a model to learn these intermediate outputs selectively, and associates the output sequence with them when the model produces its output;
the Attention mechanism is specifically a similarity measure: the more similar the current input is to the target state, the greater the weight given to the current input.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010055455.9A CN111275169A (en) | 2020-01-17 | 2020-01-17 | Method for predicting building thermal load in short time |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010055455.9A CN111275169A (en) | 2020-01-17 | 2020-01-17 | Method for predicting building thermal load in short time |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111275169A true CN111275169A (en) | 2020-06-12 |
Family
ID=71003299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010055455.9A Pending CN111275169A (en) | 2020-01-17 | 2020-01-17 | Method for predicting building thermal load in short time |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111275169A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111931989A (en) * | 2020-07-10 | 2020-11-13 | 国网浙江省电力有限公司绍兴供电公司 | Power system short-term load prediction method based on deep learning neural network |
CN112747477A (en) * | 2021-01-25 | 2021-05-04 | 华南理工大学 | Intelligent control system and method for safe and energy-saving gas-fired hot water boiler |
CN113052214A (en) * | 2021-03-14 | 2021-06-29 | 北京工业大学 | Heat exchange station ultra-short term heat load prediction method based on long and short term time series network |
CN113657660A (en) * | 2021-08-12 | 2021-11-16 | 杭州英集动力科技有限公司 | Heat source load prediction method based on substation load and heat supply network hysteresis model |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050192915A1 (en) * | 2004-02-27 | 2005-09-01 | Osman Ahmed | System and method for predicting building thermal loads |
CN107239859A (en) * | 2017-06-05 | 2017-10-10 | 国网山东省电力公司电力科学研究院 | The heating load forecasting method of Recognition with Recurrent Neural Network is remembered based on series connection shot and long term |
CN108921341A (en) * | 2018-06-26 | 2018-11-30 | 国网山东省电力公司电力科学研究院 | A kind of steam power plant's short term thermal load forecasting method encoded certainly based on gate |
CN110633867A (en) * | 2019-09-23 | 2019-12-31 | 国家电网有限公司 | Ultra-short-term load prediction model based on GRU and attention mechanism |
-
2020
- 2020-01-17 CN CN202010055455.9A patent/CN111275169A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050192915A1 (en) * | 2004-02-27 | 2005-09-01 | Osman Ahmed | System and method for predicting building thermal loads |
CN107239859A (en) * | 2017-06-05 | 2017-10-10 | 国网山东省电力公司电力科学研究院 | The heating load forecasting method of Recognition with Recurrent Neural Network is remembered based on series connection shot and long term |
CN108921341A (en) * | 2018-06-26 | 2018-11-30 | 国网山东省电力公司电力科学研究院 | A kind of steam power plant's short term thermal load forecasting method encoded certainly based on gate |
CN110633867A (en) * | 2019-09-23 | 2019-12-31 | 国家电网有限公司 | Ultra-short-term load prediction model based on GRU and attention mechanism |
Non-Patent Citations (4)
Title |
---|
一杯冰拿铁: "深度学习中的Attention机制" (The Attention mechanism in deep learning), pages 1-2, Retrieved from the Internet <URL:https://blog.csdn.net/guohao_zhang/article/details/79540014> * |
一杯冰拿铁: "深度学习中的Attention机制" (The Attention mechanism in deep learning), pages 2 * |
李昭昱等 (Li Zhaoyu et al.): "基于attention机制的LSTM神经网络超短期负荷预测方法" (Ultra-short-term load forecasting method based on an attention-mechanism LSTM neural network), pages 2-3 * |
李鹏辉等 (Li Penghui et al.): "基于ARIMA LSTM组合模型的楼宇短期负荷预测方法研究" (Research on a short-term building load forecasting method based on a combined ARIMA-LSTM model), no. 06 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111931989A (en) * | 2020-07-10 | 2020-11-13 | 国网浙江省电力有限公司绍兴供电公司 | Power system short-term load prediction method based on deep learning neural network |
CN112747477A (en) * | 2021-01-25 | 2021-05-04 | 华南理工大学 | Intelligent control system and method for safe and energy-saving gas-fired hot water boiler |
CN112747477B (en) * | 2021-01-25 | 2022-09-20 | 华南理工大学 | Intelligent control system and method for safe and energy-saving gas-fired hot water boiler |
CN113052214A (en) * | 2021-03-14 | 2021-06-29 | 北京工业大学 | Heat exchange station ultra-short term heat load prediction method based on long and short term time series network |
CN113052214B (en) * | 2021-03-14 | 2024-05-28 | 北京工业大学 | Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network |
CN113657660A (en) * | 2021-08-12 | 2021-11-16 | 杭州英集动力科技有限公司 | Heat source load prediction method based on substation load and heat supply network hysteresis model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111275169A (en) | Method for predicting building thermal load in short time | |
CN110298501B (en) | Electrical load prediction method based on long-time and short-time memory neural network | |
CN110942101B (en) | Rolling bearing residual life prediction method based on depth generation type countermeasure network | |
CN109886444B (en) | Short-time traffic passenger flow prediction method, device, equipment and storage medium | |
CN109685252B (en) | Building energy consumption prediction method based on cyclic neural network and multi-task learning model | |
CN107544904B (en) | Software reliability prediction method based on deep CG-LSTM neural network | |
CN109146156B (en) | Method for predicting charging amount of charging pile system | |
CN111091196B (en) | Passenger flow data determination method and device, computer equipment and storage medium | |
CN112819136A (en) | Time sequence prediction method and system based on CNN-LSTM neural network model and ARIMA model | |
CN110766212A (en) | Ultra-short-term photovoltaic power prediction method for historical data missing electric field | |
CN113852432B (en) | Spectrum Prediction Sensing Method Based on RCS-GRU Model | |
CN112329990A (en) | User power load prediction method based on LSTM-BP neural network | |
Dong et al. | An integrated deep neural network approach for large-scale water quality time series prediction | |
CN113449919B (en) | Power consumption prediction method and system based on feature and trend perception | |
CN113537591A (en) | Long-term weather prediction method and device, computer equipment and storage medium | |
CN114266201B (en) | Self-attention elevator trapping prediction method based on deep learning | |
CN116227180A (en) | Data-driven-based intelligent decision-making method for unit combination | |
CN110222910B (en) | Active power distribution network situation prediction method and prediction system | |
CN117709540A (en) | Short-term bus load prediction method and system for identifying abnormal weather | |
CN109102698B (en) | Method for predicting short-term traffic flow in road network based on integrated LSSVR model | |
CN116628444A (en) | Water quality early warning method based on improved meta-learning | |
CN116822722A (en) | Water level prediction method, system, device, electronic equipment and medium | |
CN116739130A (en) | Multi-time scale load prediction method of TCN-BiLSTM network | |
CN113962431B (en) | Bus load prediction method for two-stage feature processing | |
CN116415485A (en) | Multi-source domain migration learning residual service life prediction method based on dynamic distribution self-adaption |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |