CN113052214B - Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network - Google Patents
Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network
- Publication number
- CN113052214B (application CN202110274414.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- model
- short
- layer
- term
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Abstract
The invention discloses an ultra-short-term heat load prediction method for heat exchange stations. First, a random forest algorithm screens the features and reduces their dimensionality; the data are then standardized. A heat load prediction model based on a long- and short-term time-series network (LSTNet) is then established: the model captures short- and long-term feature information through a convolutional layer and a recurrent layer, introduces a recurrent-skip layer to capture longer-period feature information, and uses an autoregressive algorithm to add linear processing capacity, enhancing the robustness of the model. By exploiting the periodicity of the hourly load, the method mitigates the information loss that neural networks suffer on long input sequences, thereby improving prediction performance.
Description
Technical Field
The invention relates to the technical field of central heating, and in particular to a method for predicting the ultra-short-term heat load of a heat exchange station. It is a specific application of data-driven methods to heat load prediction in the central heating process.
Background
With the continued development of China's economy and society, urbanization keeps rising, and central heating gradually covers cities and rural areas in northern China. According to the National Bureau of Statistics, the urban central heating area of China reached 8.78 billion square meters by 2018, an increase of 5.67% over the end of 2017. To save energy, protect the environment, and avoid uneven heat supply, heat load prediction has become an important research problem. A survey of one heating company found a heating area of about 3.5 million square meters; lowering the supply temperature by 0.5 °C during heating would save roughly ten million yuan. Heat load prediction therefore has great practical significance for energy conservation, environmental protection, and economic benefit. A central heating system is a nonlinear, large-scale system containing many valves and pumps, for which an accurate mathematical model is difficult to establish, so data-driven methods are well suited to the field of heat load prediction.
The invention is designed for the task of ultra-short-term heat load prediction at heat exchange stations. A heat exchange station is directly connected to heat users, distributes heat to them, and is directly regulated by the heating company. In practice, the heat exchange station is close to the district's heat users, and the lag of heat supply is close to 1 hour. Taking the heat exchange station as the research object and the ultra-short term as the time scale therefore has good practical significance.
Traditional heat load prediction methods mainly include grey prediction, time-series prediction, and regression. With the continued development of intelligent algorithms, many machine learning and neural network methods have been applied in this field, such as Support Vector Regression (SVR), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM). However, all of these methods rely on the temporal characteristics of the heat load: when the input sequence is long, gradients vanish, information is easily lost, correlations with longer-term information are dropped, and prediction accuracy needs improvement.
Disclosure of Invention
To address the tendency of neural networks to lose longer-term information when processing long sequences, the invention proposes a Long Short-term Time-series Network (LSTNet) model. The heat load is a typical time series and is periodic hour by hour. The proposed model exploits this property and introduces the idea of recurrent skips, which effectively mitigates information loss. First, a Random Forest (RF) algorithm screens the features and reduces their dimensionality. A heat load prediction model based on LSTNet is then established: a convolutional layer and a recurrent layer capture short- and long-term feature information, a recurrent-skip layer captures longer-period feature information, and an autoregressive algorithm adds linear processing capacity, enhancing the robustness of the model.
The invention adopts the following technical scheme and implementation steps:
S1, meteorological data and heating data over a certain period are selected, and a dataset is constructed as the input variable Xn;
S2, the data are preprocessed: missing values and outliers are identified and corrected, and the data are standardized;
S3, the input variables are screened with the RF method to reduce the dimensionality of the dataset, giving Xm, which is divided into a training set and a test set at a ratio of 8:2;
S4, the training set is fed into the LSTNet model sample by sample, and the weights and biases of the model are trained:
S401, a convolutional layer first captures short-term local feature information;
S402, a recurrent layer captures long-term macroscopic information, with output ht^R; in parallel, a recurrent-skip layer exploits the periodicity of the sequence to capture longer-term information, with output ht^S;
S403, the outputs of the recurrent layer and the recurrent-skip layer are combined through a fully connected layer to obtain the output yt^D;
S404, an AR component adds a linear term for prediction so that the model can follow scale changes in the input; this enhances robustness and yields the output yt^A;
S405, the output module sums the output of the neural network part and the output of the AR model to obtain the final prediction model.
S5, the test set is fed into the trained LSTNet model sample by sample to obtain the predicted values yi.
Advantageous effects
Compared with the prior art, the method fully exploits the periodicity of the ultra-short-term heat load and, by introducing a recurrent-skip layer, compensates for the information loss that conventional neural networks suffer from vanishing gradients. Unlike traditional neural network algorithms, it explicitly models the periodic character of the hourly heat load, is more representative, and better completes the task of ultra-short-term heat load prediction.
Drawings
FIG. 1 is a diagram of a model structure of the present invention;
FIG. 2 is a diagram showing simulated thermal load data according to the present invention;
FIG. 3 is a diagram showing the model predictive results of the present invention LSTNet;
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings and the specific embodiments of the invention, so that the technical features and advantages of the invention are more apparent.
S1, as many relevant feature variables as possible are selected, which may include meteorological data, operating-condition data, heat load data, and so on, to construct a heat load dataset Xn = {x1, x2, …, xn}, where n is the number of feature variables;
S2, after the dataset is constructed, the data are preprocessed:
S201, missing values (entries that are zero or null) are filled using the following formula:
xi = 0.4·xi-1 + 0.4·xi+1 + 0.2·xi+2 (1)
where xi is the current missing value, and xi-1, xi+1, and xi+2 are the values at the previous moment, the next moment, and the moment after next, respectively;
S202, outliers, i.e. values that exceed the specified range by a factor of three or more, are treated as missing values;
S203, each dimension of the input variables is standardized (z-score normalization) with the following formula:
yi = (xi − x̄)/s (2)
where yi is the normalized value, xi is the original value, and x̄ and s are the mean and standard deviation of the raw data, respectively. The normalized data have zero mean and unit variance and are dimensionless.
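A minimal NumPy sketch of the preprocessing in S201–S203; the weighting coefficients and the zero/null convention follow the text, while the series values are made up for illustration (boundary samples without two successors are left untouched):

```python
import numpy as np

def impute_missing(x):
    """Fill zero/NaN entries with the weighted formula from the text:
    x_i = 0.4*x_{i-1} + 0.4*x_{i+1} + 0.2*x_{i+2}."""
    x = np.asarray(x, dtype=float).copy()
    for i in range(1, len(x) - 2):          # needs one predecessor, two successors
        if x[i] == 0 or np.isnan(x[i]):
            x[i] = 0.4 * x[i - 1] + 0.4 * x[i + 1] + 0.2 * x[i + 2]
    return x

def standardize(x):
    """Z-score normalization: zero mean, unit variance, dimensionless."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Example: the zero at index 1 is treated as missing.
series = impute_missing([10.0, 0.0, 12.0, 14.0, 16.0])  # index 1 -> 11.6
z = standardize([1.0, 2.0, 3.0, 4.0, 5.0])
```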
S3, the feature variables are screened and their dimensionality reduced with the RF algorithm. The idea of assessing feature importance with a random forest is simple: measure how much each feature contributes to each tree in the forest, average these contributions, and compare them across features. The importance of a feature x is denoted IMP and is computed as follows:
S301, for each decision tree in the random forest, the error on its corresponding out-of-bag (OOB) data is computed and denoted OOBError1:
OOBError1 = X / O (3)
where O is the total number of out-of-bag samples; the out-of-bag samples are fed into the random forest classifier, the predictions are compared with the true values, and X is the number of misclassifications.
S302, noise is randomly added to feature x in all out-of-bag samples, and the out-of-bag error is computed again and recorded as OOBError2;
S303, assuming there are N trees in the random forest, the importance IMP of feature x is given by formula (4):
IMP = (1/N) · Σi=1..N (OOBError2i − OOBError1i) (4)
If the out-of-bag accuracy drops sharply after noise is randomly added to a feature, that feature has a large influence on the classification of the samples, i.e. its importance is high.
The invention sorts the feature variables by importance in descending order using the random forest, determines a deletion ratio, and removes the corresponding proportion of unimportant features from the current feature set, giving a new feature set Xm = {x1, x2, …, xm}, where m < n; the deletion ratio is set according to the number of feature variables in the original dataset. After dimensionality reduction, the dataset is divided into a training set and a test set at a ratio of 8:2.
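The screening and split steps can be sketched with scikit-learn. This is only an illustration on synthetic data: scikit-learn's `permutation_importance` shuffles feature columns on held-in data rather than implementing the per-tree OOB scheme of S301–S303, and the kept-feature count of 3 stands in for the deletion ratio:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                   # 5 candidate features
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=400)  # only feature 0 is informative

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=5, random_state=0).importances_mean
order = np.argsort(imp)[::-1]                   # descending importance

keep = order[:3]                                # keep top-m features
X_m = X[:, keep]

split = int(0.8 * len(X_m))                     # 8:2 chronological split
X_train, X_test = X_m[:split], X_m[split:]
```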
S4, the training-set data are fed into the LSTNet model one by one in time order to train the weights and biases of the model; the overall structure of the LSTNet model is shown in fig. 1:
S401, the first module of the network is a convolutional layer whose function is to extract features and capture local short-term feature information. The module consists of multiple filters of width ω and height m, where m equals the number of features. The output of the i-th filter is:
hi=ReLU(Wi*X+bi) (5)
where the output hi is a vector, ReLU is the activation function ReLU(x) = max(0, x), * denotes the convolution operation, and Wi and bi are the weight matrix and bias, respectively.
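A NumPy sketch of eq. (5): each filter of width ω slides over time while spanning all m features, followed by ReLU. The shapes and values below are illustrative, not from the patent:

```python
import numpy as np

def conv_layer(X, W, b):
    """X: (T, m) input window; W: (k, w, m) = k filters of width w spanning
    all m features; b: (k,) biases. Returns the (T - w + 1, k) ReLU
    feature map of eq. (5)."""
    T, m = X.shape
    k, w, _ = W.shape
    out = np.zeros((T - w + 1, k))
    for t in range(T - w + 1):
        patch = X[t:t + w]                                  # (w, m) time slice
        out[t] = np.maximum(0.0, (W * patch).sum(axis=(1, 2)) + b)
    return out

# Example: one all-ones 2x2 filter over an all-ones (5, 2) window.
H = conv_layer(np.ones((5, 2)), np.ones((1, 2, 2)), np.zeros(1))
```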
S402, the output of the convolutional module is fed into both the recurrent layer and the recurrent-skip layer of the second module. The recurrent layer uses a Gated Recurrent Unit (GRU), with ReLU as the activation function of the hidden-state update. The hidden-state output ht^R of the unit at time t is:
zt = σ(Wz·xt + Uz·ht-1 + bz)
rt = σ(Wr·xt + Ur·ht-1 + br)
h̃t = ReLU(Wh·xt + Uh·(rt ⊙ ht-1) + bh)
ht^R = (1 − zt) ⊙ ht-1 + zt ⊙ h̃t (6)
where zt and rt are the outputs of the update gate and the reset gate of the GRU neuron, and h̃t is the intermediate (candidate) state; σ is the sigmoid activation function; xt is the input to this layer at time t; ⊙ is the element-wise product; W, U, and b are the weight matrices and biases of the gate units. The output of this layer is the hidden state at each time step.
A GRU network can capture long-term historical information, but because of vanishing gradients it cannot retain all past information, so correlations with longer-term information are lost. The LSTNet model solves this with a skip idea: for periodic data, information from far in the past is reached via the period p, a hyperparameter. At prediction time t, data from the same moment in the previous cycle, and in earlier cycles, can be used for prediction. Because one period is long, this kind of dependency is hard for a recurrent unit to capture, so a recurrent structure with skip connections is introduced, extending the time span of the information flow to obtain longer-term data. Its output ht^S at time t is:
zt = σ(Wz·xt + Uz·ht-p + bz)
rt = σ(Wr·xt + Ur·ht-p + br)
h̃t = ReLU(Wh·xt + Uh·(rt ⊙ ht-p) + bh)
ht^S = (1 − zt) ⊙ ht-p + zt ⊙ h̃t (7)
The input of this layer is the same as that of the recurrent layer, namely the output of the convolutional layer. Here p is the number of hidden units skipped, i.e. the period. The period is usually easy to determine from engineering experience or the data trend; if the data are aperiodic or the periodicity changes dynamically, an attention mechanism can be introduced to update p dynamically.
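The skip recurrence can be sketched in NumPy: the only change from a plain GRU is that step t reads the hidden state from step t − p instead of t − 1 (zero state for t < p). The candidate uses ReLU, following the text; all weights and inputs below are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_skip(xs, p, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """GRU recurrence in which step t reuses the hidden state from
    step t - p (the skip period) rather than t - 1."""
    T = xs.shape[0]
    d = bz.shape[0]
    h = np.zeros((T, d))
    for t in range(T):
        h_prev = h[t - p] if t >= p else np.zeros(d)   # skip connection
        z = sigmoid(xs[t] @ Wz + h_prev @ Uz + bz)     # update gate
        r = sigmoid(xs[t] @ Wr + h_prev @ Ur + br)     # reset gate
        h_cand = np.maximum(0.0, xs[t] @ Wh + (r * h_prev) @ Uh + bh)  # ReLU candidate
        h[t] = (1.0 - z) * h_prev + z * h_cand
    return h

# Example: with a saturated update gate (bz large) and identity candidate
# weights, the hidden state tracks the input almost exactly.
Z = np.zeros((1, 1))
h = gru_skip(np.array([[1.0], [2.0], [3.0]]), 2,
             Z, Z, np.array([10.0]),
             Z, Z, np.zeros(1),
             np.array([[1.0]]), Z, np.zeros(1))
```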
S403, to connect the two layers, the model combines their outputs with a fully connected layer. The output of this layer at time t is:
yt^D = WR·ht^R + WS·ht^S + b (8)
where WR and WS are the weights assigned to the recurrent layer and the recurrent-skip layer, respectively, and b is the bias.
S404, in real datasets the input scale changes aperiodically, which significantly degrades the prediction accuracy of a neural network model because the neural network is not sensitive to input-output scale changes. To remedy this deficiency, a linear part is added to the model, adopting a classical autoregressive (AR) model to enhance robustness. The output yt^A of the AR model at time t is:
yt^A = Σk=0..qA−1 Wk^A·yt−k + b^A (9)
where qA is the size of the input window over the input matrix.
S405, the output module integrates the output of the neural network part and the output of the AR model; the final output yt of the LSTNet model is:
yt = yt^D + yt^A (10)
where yt is the final predicted value of the model at time t.
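The integration step above is a plain sum of the nonlinear dense output and a linear AR term over the last qA observed loads. A small sketch, with made-up weights and a made-up load window:

```python
import numpy as np

def lstnet_output(y_dense, y_window, w_ar, b_ar):
    """Final LSTNet prediction: the fully connected output y^D_t plus a
    linear AR term computed from the last q_A observed load values."""
    y_ar = float(np.dot(w_ar, y_window) + b_ar)  # autoregressive component
    return y_dense + y_ar

# Example: y^D_t = 1.0, AR window of the two most recent loads.
y_hat = lstnet_output(1.0, np.array([2.0, 3.0]), np.array([0.5, 0.5]), 0.1)
```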
S406, in model training, the mean square error (MSE) is used as the loss function:
L = (1/n) · Σi=1..n (ŷi − yi)² (11)
where n is the number of valid samples and ŷi and yi are the predicted and true values, respectively.
S5, inputting the test sets into the trained LSTNet model one by one to obtain a predicted value y i.
To verify the effectiveness of the method, normal data from one heating season are used. The data were obtained by simulating 120 days of the heating process of a residential-district heat exchange station in Zhengzhou, Henan, with the EnergyPlus software, as shown in FIG. 2. Comparison experiments were run against AR, the Autoregressive Integrated Moving Average model (ARIMA), MLR, SVR, GRU, and other models; the experimental results are shown in FIG. 3, and the evaluation-index results of each model are listed in Table 1.
Table 1 comparison of evaluation index of each model for heat load prediction
Model | RMSE(×103) | MAE(×103) | R-Squared |
AR | 40.815 | 27.213 | 76.724% |
ARIMA | 33.892 | 19.028 | 83.951% |
MLR | 31.631 | 20.857 | 86.020% |
SVR | 29.220 | 18.662 | 88.070% |
GRU | 24.994 | 17.249 | 91.268% |
LSTNet | 15.833 | 12.341 | 96.501% |
The experimental results show that, for hourly heat load prediction, the LSTNet model outperforms the other models and is closest to 1 on the R-squared index. Compared with the GRU model, LSTNet reduces RMSE by 36.7% and MAE by 28.5%, a clear improvement in model accuracy.
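The three indices in Table 1 can be computed as follows; the sample vectors are illustrative, not the patent's data:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """RMSE, MAE and R-squared, as used in Table 1."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))                       # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))  # total sum of squares
    return rmse, mae, 1.0 - ss_res / ss_tot

rmse, mae, r2 = evaluate([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
```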
Claims (3)
1. A heat exchange station ultra-short-term load prediction method based on a long-short-period time sequence network is characterized by comprising the following steps:
S1: selecting meteorological data and heating data in a certain period of time, and constructing a data set as an input variable X n;
S2: preprocessing data, wherein the preprocessing comprises the steps of identifying and correcting missing values and outliers, and carrying out standardized processing on the data;
S3: screening the input variables with the RF method and reducing the dimensionality of the dataset to obtain X m, which is divided into a training set and a test set at a ratio of 8:2;
S4: inputting the training set into LSTNet models one by one, and training weights and biases of the models to obtain trained network models;
S5: inputting the test set into the trained LSTNet model one by one to obtain predicted values y i;
The specific method of the LSTNet model proposed in step S4 comprises:
S401: the first module of the network is a convolution layer, the layer is composed of a plurality of filters, and the output formula of the ith filter is as follows:
hi=ReLU(Wi*X+bi) (1)
wherein h i is a vector, ReLU is the activation function ReLU(x) = max(0, x), * denotes the convolution operation, and W i and b i are the weight matrix and bias, respectively;
S402: the second module comprises a recurrent layer and a recurrent-skip layer, which obtain long-term and longer-term feature information; the hidden-state outputs ht^R and ht^S of the two layers at time t are, respectively:
ht^R = (1 − zt) ⊙ ht-1 + zt ⊙ ReLU(Wh·xt + Uh·(rt ⊙ ht-1) + bh) (2)
ht^S = (1 − zt) ⊙ ht-p + zt ⊙ ReLU(Wh·xt + Uh·(rt ⊙ ht-p) + bh) (3)
wherein zt and rt are the outputs of the update gate and the reset gate of the GRU neuron, σ(W·xt + U·h + b); σ is the sigmoid activation function; xt is the input to this layer at time t; ⊙ is the element-wise product; p is the number of hidden units skipped, i.e. the period; W, U, and b are the weight matrices and biases of the gate units;
S403: to connect the two layers, the model combines their outputs with a fully connected layer; the output of this layer at time t is:
yt^D = WR·ht^R + WS·ht^S + b (4)
wherein WR and WS are the weights assigned to the recurrent layer and the recurrent-skip layer, respectively, and b is the bias;
S404: to capture the change in input scale, an AR process is added to the model, the output of which at time t The method comprises the following steps:
Where q A is the input window size on the input matrix;
S405: integrating the output of the neural network part and the output of the AR model to obtain the final predicted output yt of the model:
yt = yt^D + yt^A (6)
wherein yt is the final predicted value of the model at time t;
S406: in the model training process, a mean-square-error function is used as the loss function:
L = (1/n) · Σi=1..n (ŷi − yi)² (7)
where n is the number of valid samples and ŷi and yi are the predicted and true values, respectively.
2. The method for predicting the ultrashort-term load of a heat exchange station based on a long-short-term time sequence network according to claim 1, wherein the step S2 is performed with data preprocessing, and comprises the following steps:
S201: missing values may be filled using the following formula:
xi = 0.4·xi-1 + 0.4·xi+1 + 0.2·xi+2 (8)
where xi is the current missing value, and xi-1, xi+1, and xi+2 are the values at the previous moment, the next moment, and the moment after next, respectively;
S202: treating outliers, i.e. values exceeding the specified range by a factor of three or more, as missing values;
S203: standardizing each dimension of the data with:
yi = (xi − x̄)/s (9)
where yi is the normalized value, xi is the original value, and x̄ and s are the mean and standard deviation of the raw data, respectively; the normalized data have zero mean and unit variance and are dimensionless.
3. The method for predicting ultra-short term load of heat exchange station based on long-short term time series network according to claim 1, wherein the method for performing the dimension reduction operation in step S3 comprises:
s301: calculating the out-of-bag data error of each decision tree in the random forest;
the corresponding out-of-bag data are used to compute its out-of-bag data error, denoted OOBError1:
OOBError1 = X / O (10)
wherein O is the total number of out-of-bag samples; the out-of-bag samples are fed into the random forest classifier, the predictions are compared with the true values, and X is the number of misclassifications;
s302: adding noise interference to the feature x, and calculating the out-of-bag data error of each decision tree in the random forest again;
S303: computing the importance IMP of each feature as:
IMP = (1/N) · Σi=1..N (OOBError2i − OOBError1i) (11)
wherein OOBError2 and OOBError1 are the out-of-bag errors after and before adding noise, respectively, and N is the total number of decision trees in the random forest.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110274414.3A CN113052214B (en) | 2021-03-14 | 2021-03-14 | Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110274414.3A CN113052214B (en) | 2021-03-14 | 2021-03-14 | Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113052214A CN113052214A (en) | 2021-06-29 |
CN113052214B true CN113052214B (en) | 2024-05-28 |
Family
ID=76512106
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110274414.3A Active CN113052214B (en) | 2021-03-14 | 2021-03-14 | Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113052214B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113256036B (en) * | 2021-07-13 | 2021-10-12 | 国网浙江省电力有限公司 | Power supply cost analysis and prediction method based on Prophet-LSTNet combined model |
CN113821344B (en) * | 2021-09-18 | 2024-04-05 | 中山大学 | Cluster load prediction method and system based on machine learning |
CN114912169B (en) * | 2022-04-24 | 2024-05-31 | 浙江英集动力科技有限公司 | Industrial building heat supply autonomous optimization regulation and control method based on multisource information fusion |
CN115860270A (en) * | 2023-02-21 | 2023-03-28 | 保定博堃元信息科技有限公司 | Network supply load prediction system and method based on LSTM neural network |
CN118074112A (en) * | 2024-02-21 | 2024-05-24 | 北京智芯微电子科技有限公司 | Photovoltaic power prediction method based on similar day and long-short period time sequence network |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110414747A (en) * | 2019-08-08 | 2019-11-05 | 东北大学秦皇岛分校 | A kind of space-time shot and long term urban human method for predicting based on deep learning |
CN110610232A (en) * | 2019-09-11 | 2019-12-24 | 南通大学 | Long-term and short-term traffic flow prediction model construction method based on deep learning |
CN110619430A (en) * | 2019-09-03 | 2019-12-27 | 大连理工大学 | Space-time attention mechanism method for traffic prediction |
CN111275169A (en) * | 2020-01-17 | 2020-06-12 | 北京石油化工学院 | Method for predicting building thermal load in short time |
CN111309577A (en) * | 2020-02-19 | 2020-06-19 | 北京工业大学 | Spark-oriented batch processing application execution time prediction model construction method |
AU2020101854A4 (en) * | 2020-08-17 | 2020-09-24 | China Communications Construction Co., Ltd. | A method for predicting concrete durability based on data mining and artificial intelligence algorithm |
- 2021-03-14: Application CN202110274414.3A filed in China (CN); granted as patent CN113052214B (en), status Active
Non-Patent Citations (2)
Title |
---|
Short-term rolling load forecasting based on cluster analysis and random forest; Xun Gangyi; Smart City; 2018-05-14 (Issue 09); full text * |
Ultra-short-term load forecasting using a hybrid neural network with parallel multi-model fusion; Zhuang Jiayi, Yang Guohua, Zheng Haofeng, Wang Yudong, Hu Ruikun, Ding Xu; Electric Power Construction; 2020-10-01 (Issue 10); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN113052214A (en) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113052214B (en) | Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network | |
CN113053115B (en) | Traffic prediction method based on multi-scale graph convolution network model | |
CN109063911B (en) | Load aggregation grouping prediction method based on gated cycle unit network | |
Fan et al. | Development of prediction models for next-day building energy consumption and peak power demand using data mining techniques | |
Alencar et al. | Hybrid approach combining SARIMA and neural networks for multi-step ahead wind speed forecasting in Brazil | |
CN111178616B (en) | Wind speed prediction method based on negative correlation learning and regularization extreme learning machine integration | |
CN113554466A (en) | Short-term power consumption prediction model construction method, prediction method and device | |
CN111027772A (en) | Multi-factor short-term load prediction method based on PCA-DBILSTM | |
Tang et al. | An ensemble deep learning model for short-term load forecasting based on ARIMA and LSTM | |
CN111985719B (en) | Power load prediction method based on improved long-term and short-term memory network | |
CN111242351A (en) | Tropical cyclone track prediction method based on self-encoder and GRU neural network | |
CN114169434A (en) | Load prediction method | |
CN114492922A (en) | Medium-and-long-term power generation capacity prediction method | |
CN106526710A (en) | Haze prediction method and device | |
Zhao et al. | Short-term microgrid load probability density forecasting method based on k-means-deep learning quantile regression | |
CN114595861A (en) | MSTL (modeling, transformation, simulation and maintenance) and LSTM (least Square TM) model-based medium-and-long-term power load prediction method | |
CN113947182B (en) | Traffic flow prediction model construction method based on dual-stage stacked graph convolution network | |
CN115238854A (en) | Short-term load prediction method based on TCN-LSTM-AM | |
CN113762591B (en) | Short-term electric quantity prediction method and system based on GRU and multi-core SVM countermeasure learning | |
Zuo | Integrated forecasting models based on LSTM and TCN for short-term electricity load forecasting | |
CN114611757A (en) | Electric power system short-term load prediction method based on genetic algorithm and improved depth residual error network | |
CN117909888A (en) | Intelligent artificial intelligence climate prediction method | |
Zhichao et al. | Short-term load forecasting of multi-layer LSTM neural network considering temperature fuzzification | |
CN114254828B (en) | Power load prediction method based on mixed convolution feature extractor and GRU | |
CN110349050B (en) | Intelligent electricity stealing criterion method and device based on power grid parameter key feature extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||