CN113052214A - Heat exchange station ultra-short term heat load prediction method based on long and short term time series network - Google Patents

Heat exchange station ultra-short term heat load prediction method based on long and short term time series network Download PDF

Info

Publication number
CN113052214A
CN113052214A
Authority
CN
China
Prior art keywords
data
model
short term
layer
long
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110274414.3A
Other languages
Chinese (zh)
Other versions
CN113052214B (en)
Inventor
刘旭东
李硕
范青武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202110274414.3A priority Critical patent/CN113052214B/en
Publication of CN113052214A publication Critical patent/CN113052214A/en
Application granted granted Critical
Publication of CN113052214B publication Critical patent/CN113052214B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The invention discloses a method for predicting the ultra-short-term heat load of a heat exchange station. First, a random forest algorithm screens the features and reduces their dimensionality; the data are then standardized. Next, a heat load prediction model based on a long- and short-term time-series network is established: the model captures long- and short-term feature information through a convolutional layer and a recurrent layer, introduces a recurrent-skip layer to capture still longer-term feature information, and adds linear processing capability through an autoregressive algorithm, enhancing the model's robustness. By exploiting the periodicity of the hourly load, the method overcomes the information loss that occurs when a neural network processes long sequences, thereby improving prediction performance.

Description

Heat exchange station ultra-short term heat load prediction method based on long and short term time series network
Technical Field
The invention relates to the technical field of central heating, and in particular to a method for predicting the ultra-short-term heat load of a heat exchange station. It is a specific application of a data-driven method to heat load prediction in the central heating process.
Background
With the continued economic development of China and the steady rise in urbanization, central heating now covers cities and rural areas across northern China. According to the National Bureau of Statistics, the urban central heating area in China reached 8.78 billion square meters by 2018, an increase of 5.67% over the end of 2017. The fossil fuel consumed by central heating causes serious environmental pollution and haze, so to save energy, protect the environment, and avoid uneven heat supply, heat load prediction has become an important research problem. A survey of one heating company found that its heating area is about 3.5 million square meters, and that lowering the supply temperature by 0.5 °C during heating would save nearly ten thousand yuan. Heat load prediction is therefore of great practical significance for energy saving, environmental protection, and economic benefit. A central heating system is a nonlinear large-scale system containing many valves and pumps, for which an accurate mathematical model is difficult to establish, so data-driven methods are better suited to heat load prediction.
The method is designed mainly for the ultra-short-term heat load forecasting task of the heat exchange station. The heat exchange station connects directly to the heat users and distributes heat to them, and the heating company regulates the heat exchange stations directly. In practice, a heat exchange station is close to the district's heat users, and the lag of the heat supply is close to 1 hour. Taking the heat exchange station as the research object and the ultra-short term as the time scale therefore has good practical significance.
Traditional heat load prediction methods mainly include grey prediction, time-series prediction, and regression. With the continued development of intelligent algorithms, many machine learning and neural network methods have been applied in this field, such as support vector regression (SVR), recurrent neural networks (RNN), and long short-term memory (LSTM). However, these methods rely on the time-series characteristics of the heat load: when the input sequence is long, vanishing gradients easily cause information loss, so long-range correlations are lost and prediction accuracy needs improvement.
Disclosure of Invention
To solve the problem that long-term information is easily lost when a neural network processes long sequences, the invention proposes a Long- and Short-term Time-series Network (LSTNet) model. Heat load is a typical time-series problem, and the hourly heat load is periodic. The proposed LSTNet model exploits this property and introduces the idea of recurrent skipping, effectively alleviating the information loss. First, the model screens the features and reduces their dimensionality with a random forest (RF) algorithm; then a heat load prediction model based on the long- and short-term time-series network is established, which captures long- and short-term feature information through the convolutional layer and the recurrent layer, introduces a recurrent-skip layer to capture still longer-term feature information, and adds linear processing capability through an autoregressive algorithm, enhancing the model's robustness.
The invention adopts the following technical scheme and implementation steps:
S1, selecting meteorological data and heating data in a certain time period and constructing a data set as the input variable X_n;
S2, preprocessing the data, including identifying and correcting missing values and outliers, and standardizing the data;
S3, screening the input variables with the RF method and reducing the dimensionality of the data set to obtain X_m, then splitting the data set into a training set and a test set at a ratio of 8:2;
S4, inputting the training set into the LSTNet model item by item and training the weights and biases of the model:
S401, first capturing short-term local feature information with the convolutional layer;
S402, capturing long-term macroscopic information with the recurrent layer, outputting h_t^R; meanwhile, the recurrent-skip layer exploits the periodic characteristics of the sequence to capture longer-term information, outputting h_t^S;
S403, combining the outputs of the recurrent layer and the recurrent-skip layer through a fully connected layer to obtain the output y_t^D;
S404, adding a linear component to the prediction by incorporating the output of the AR process, which also lets the model capture changes in the input scale and enhances its robustness, yielding the output y_t^A;
S405, the output module integrates the output of the neural network part and the output of the AR model to obtain the final prediction model.
S5, inputting the test set into the trained LSTNet model one by one to obtain the predicted value ŷ_t.
Advantageous effects
Compared with the prior art, the method fully considers the periodicity of the ultra-short-term heat load and, by introducing the recurrent-skip layer, compensates for the information loss that conventional neural networks suffer from vanishing gradients. Unlike traditional neural network algorithms, the method fully exploits the periodicity of the hourly heat load, a more representative characteristic, and can better accomplish the ultra-short-term heat load prediction task.
Drawings
FIG. 1 is a diagram of a model architecture according to the present invention;
FIG. 2 is a graph of simulated heat load data presentation of the present invention;
FIG. 3 is a graph showing the prediction results of the LSTNet model of the present invention.
Detailed Description
The technical features and advantages of the present invention will become more apparent from the following detailed description of the embodiments of the present invention when taken in conjunction with the accompanying drawings.
S1, selecting as many related characteristic variables as possible, which may include meteorological data, operating-condition data, heat load data, and the like, to construct a heat load data set X_n = {x_1, x_2, …, x_n}, where n is the number of characteristic variables;
s2, after the data set is constructed, preprocessing the data:
S201, compensating missing values, i.e., entries that are 0 or null, with the following formula:
x_i = 0.4x_{i-1} + 0.4x_{i+1} + 0.2x_{i+2}   (1)
where x_i is the current missing value, and x_{i-1}, x_{i+1}, and x_{i+2} are the values at the previous moment, the next moment, and the moment after next, respectively;
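The weighted-neighbour compensation of formula (1) can be sketched as follows. This is an illustrative implementation, not code from the patent; the function name and the treatment of boundary positions are assumptions.

```python
# Hypothetical sketch of formula (1): x_i = 0.4*x_{i-1} + 0.4*x_{i+1} + 0.2*x_{i+2}.
# A missing entry is assumed to be 0 or None, as the patent states.

def fill_missing(series, missing=(None, 0)):
    """Replace 0/None entries using the weighted-neighbour formula (1)."""
    out = list(series)
    for i, v in enumerate(out):
        if v in missing:
            # the formula needs one earlier and two later valid points
            if 0 < i < len(out) - 2:
                out[i] = 0.4 * out[i - 1] + 0.4 * out[i + 1] + 0.2 * out[i + 2]
    return out

filled = fill_missing([10.0, 0, 12.0, 14.0])
# filled[1] ≈ 0.4*10 + 0.4*12 + 0.2*14 = 11.6
```

Positions too close to the sequence boundary are left untouched here; the patent does not specify how boundary gaps are handled.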
S202, treating outliers, that is, values exceeding three or more times the predetermined range, as missing values;
S203, standardizing each input variable dimension with the following formula:
y_i = (x_i − x̄) / s   (2)
where y_i is the normalized value, x_i is the original value, and x̄ and s are the mean and standard deviation of the raw data, respectively. The normalized data have a mean of 0, a variance of 1, and no dimension.
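The z-score standardization of formula (2) can be sketched in a few lines; names are illustrative.

```python
import math

def standardize(xs):
    """Z-score normalization of formula (2): y_i = (x_i - mean) / std."""
    n = len(xs)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)  # population std
    return [(x - mean) / std for x in xs]

ys = standardize([1.0, 2.0, 3.0, 4.0])
# ys has mean ≈ 0 and variance ≈ 1, matching the text above
```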
S3, screening and reducing the dimensionality of the feature variables with the RF algorithm. The idea of evaluating feature importance with a random forest is simple: determine how much each feature contributes in each tree of the forest, average these contributions, and compare them across features. The importance of a feature x is denoted IMP and is computed as follows:
S301, for each decision tree in the random forest, computing its out-of-bag error, denoted errOOB1, on the corresponding out-of-bag (OOB) data:
errOOB1 = X / O   (3)
where the OOB data are fed into the tree as input, the classifier classifies the O samples, and X is the number of misclassified samples.
S302, randomly adding noise interference to feature x in all OOB samples and recomputing the out-of-bag error, denoted errOOB2;
S303, assuming there are N trees in the random forest, the importance IMP of feature x is given by formula (4):
IMP = (1/N) Σ_{i=1}^{N} (errOOB2_i − errOOB1_i)   (4)
If the out-of-bag accuracy drops sharply after noise is randomly added to a feature, that feature has a large influence on the classification result of the samples, that is, it is highly important.
The invention uses the random forest to sort the feature variables by importance in descending order, determines a deletion ratio, and removes the corresponding proportion of unimportant features from the current feature variables, yielding a new feature set X_m = {x_1, x_2, …, x_m} with m < n. The deletion ratio is determined by the number of feature variables in the original data set. After dimensionality reduction, the data set is split into a training set and a test set at a ratio of 8:2.
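The importance measure of formula (4) averages, over the trees, the rise in out-of-bag error after permuting a feature. A minimal sketch, with made-up per-tree error values (the patent does not supply numbers):

```python
# Illustrative computation of formula (4):
# IMP = (1/N) * sum over trees of (errOOB2 - errOOB1).
# The per-tree OOB errors below are invented for demonstration.

def feature_importance(err_oob1, err_oob2):
    """err_oob1 / err_oob2: per-tree OOB errors before / after permuting feature x."""
    assert len(err_oob1) == len(err_oob2)
    n_trees = len(err_oob1)
    return sum(e2 - e1 for e1, e2 in zip(err_oob1, err_oob2)) / n_trees

imp = feature_importance([0.10, 0.12, 0.08], [0.30, 0.28, 0.26])
# a large positive IMP means the permuted feature mattered for classification
```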
S4, inputting the training set data into the LSTNet model item by item in time order and training the weights and biases of the model; the overall structure of the LSTNet model is shown in FIG. 1:
S401, the first module of the network is the convolutional layer, whose function is to extract features and capture local short-term feature information. The module consists of a number of filters of width ω and height m, where m equals the number of features. The output of the i-th filter is:
h_i = ReLU(W_i * X + b_i)   (5)
where the output h_i is a vector; ReLU is the activation function, ReLU(x) = max(0, x); * denotes the convolution operation; and W_i and b_i are the weight matrix and bias, respectively.
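The convolution of formula (5) can be sketched as follows: each filter spans all m features and slides along the time axis. All shapes and values here are illustrative, not from the patent.

```python
import numpy as np

def conv_layer(X, W, b):
    """Formula (5): X is (T, m); W is (n_filters, w, m); b is (n_filters,)."""
    T, m = X.shape
    n_filters, w, _ = W.shape
    out = np.zeros((T - w + 1, n_filters))
    for i in range(n_filters):
        for t in range(T - w + 1):
            # filter i covers w time steps and all m features at once
            out[t, i] = np.sum(W[i] * X[t:t + w]) + b[i]
    return np.maximum(out, 0.0)  # ReLU(x) = max(0, x)

rng = np.random.default_rng(0)
T, m, w, n_filters = 24, 5, 6, 4  # illustrative sizes
X = rng.standard_normal((T, m))
H = conv_layer(X, rng.standard_normal((n_filters, w, m)), np.zeros(n_filters))
# H has shape (T - w + 1, n_filters) = (19, 4), all entries non-negative
```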
S402, the output of the convolutional layer module is fed simultaneously into the recurrent layer and the recurrent-skip layer of the second module. The recurrent layer uses gated recurrent units (GRU), with ReLU as the activation function for the hidden-state update. The hidden-state output h_t^R of the unit at time t is:
r_t = σ(W_r x_t + U_r h_{t−1} + b_r)
z_t = σ(W_z x_t + U_z h_{t−1} + b_z)
c_t = ReLU(W_c x_t + r_t ⊙ (U_c h_{t−1}) + b_c)
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ c_t   (6)
where z_t and r_t are the outputs of the update gate and reset gate of the GRU neuron; c_t is the intermediate (candidate) state output; σ is the sigmoid activation function; x_t is the input to this layer at time t; ⊙ denotes the element-wise product; and W, U, and b are the weight matrices and biases of each gate unit. The output of this layer is the hidden state at each time step.
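One GRU step as in formula (6) can be written directly in numpy; as the patent specifies, ReLU replaces the usual tanh for the candidate state. Weight names and sizes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, P):
    """One update of formula (6); P holds the gate weights W, U and biases b."""
    r = sigmoid(x_t @ P["Wr"] + h_prev @ P["Ur"] + P["br"])  # reset gate
    z = sigmoid(x_t @ P["Wz"] + h_prev @ P["Uz"] + P["bz"])  # update gate
    c = np.maximum(0.0, x_t @ P["Wc"] + (r * h_prev) @ P["Uc"] + P["bc"])  # ReLU candidate
    return (1.0 - z) * h_prev + z * c

d_in, d_h = 4, 8  # illustrative input / hidden sizes
rng = np.random.default_rng(1)
shapes = {"Wr": (d_in, d_h), "Ur": (d_h, d_h), "br": (d_h,),
          "Wz": (d_in, d_h), "Uz": (d_h, d_h), "bz": (d_h,),
          "Wc": (d_in, d_h), "Uc": (d_h, d_h), "bc": (d_h,)}
P = {k: rng.standard_normal(s) * 0.1 for k, s in shapes.items()}

h = np.zeros(d_h)
for t in range(10):  # run over a short random input sequence
    h = gru_step(rng.standard_normal(d_in), h, P)
# h is the hidden state h_t^R after 10 steps, shape (d_h,)
```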
The GRU network can capture long-term historical information, but because of vanishing gradients it cannot preserve all past information, so longer-term correlations are lost. The LSTNet model solves this with a skipping idea: for periodic data, a period hyperparameter p gives access to information from the distant past. When predicting time t, the data from one, two, or more periods earlier can be used. Because a period is long, such dependencies are difficult for the ordinary recurrent unit to capture; introducing a recurrent network structure with skip connections extends the time span of the information flow and obtains longer-term data information. Its output h_t^S at time t is:
r_t = σ(W_r x_t + U_r h_{t−p} + b_r)
z_t = σ(W_z x_t + U_z h_{t−p} + b_z)
c_t = ReLU(W_c x_t + r_t ⊙ (U_c h_{t−p}) + b_c)
h_t = (1 − z_t) ⊙ h_{t−p} + z_t ⊙ c_t   (7)
The input to this layer is the same as that of the recurrent layer, namely the output of the convolutional layer. Here p is the number of hidden units skipped, i.e., the period. The period is usually easy to determine from engineering experience or the data trend; if the data are aperiodic or the periodicity changes dynamically, an attention mechanism can be introduced to update the period p dynamically.
S403, to connect the two layers above, the model combines their outputs through a fully connected layer. The output of this layer at time t is:
y_t^D = W^R h_t^R + Σ_{i=0}^{p−1} W_i^S h_{t−i}^S + b   (8)
where W^R and W^S are the weights assigned to the recurrent layer and the recurrent-skip layer, respectively, and b is the bias.
S404, in real data sets the input scale changes aperiodically, and because the neural network is not sensitive to scale changes of the inputs and outputs, this problem significantly reduces the prediction accuracy of the neural network model. To remedy this deficiency, a linear part is added to the model: a classical autoregressive (AR) model that enhances the model's robustness. The output y_t^A of the AR model at time t is:
y_t^A = Σ_{k=0}^{q^A−1} W_k^A y_{t−k} + b^A   (9)
where q^A is the size of the input window over the input matrix.
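The AR component of formula (9) is just a weighted sum of the most recent q^A load values. A sketch with hand-picked (not learned) weights:

```python
# Illustrative AR component of formula (9); the weights here are made up,
# whereas in the model they are learned together with the network.

def ar_predict(history, weights, bias=0.0):
    """Weighted sum of the last len(weights) values, most recent weighted first."""
    q = len(weights)
    recent = history[-q:][::-1]  # most recent value first
    return sum(w * y for w, y in zip(weights, recent)) + bias

pred = ar_predict([100.0, 102.0, 101.0, 103.0], weights=[0.5, 0.3, 0.2])
# pred ≈ 0.5*103 + 0.3*101 + 0.2*102 = 102.2
```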
S405, the output module integrates the output of the neural network part and the output of the AR model to obtain the final output ŷ_t of the LSTNet model:
ŷ_t = y_t^D + y_t^A   (10)
where ŷ_t is the final predicted value of the model at time t.
S406, during model training, the mean square error (MSE) is used as the loss function:
L = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²   (11)
where n is the number of valid data points, and ŷ_i and y_i are the predicted and actual test values, respectively.
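The MSE loss of formula (11) in a few lines, with illustrative values:

```python
def mse(y_pred, y_true):
    """Mean squared error loss of formula (11)."""
    assert len(y_pred) == len(y_true)
    return sum((p - a) ** 2 for p, a in zip(y_pred, y_true)) / len(y_pred)

loss = mse([1.0, 2.0, 4.0], [1.0, 2.0, 2.0])
# loss = (0 + 0 + 4) / 3 ≈ 1.333
```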
S5, inputting the test set into the trained LSTNet model one by one to obtain the predicted value ŷ_t.
To verify the effectiveness of the method, normal data from one heating season were used. The data were generated by simulating a 120-day heating process of a heat exchange station in a district of Zhengzhou, Henan, with the EnergyPlus software; the data are shown in FIG. 2. Comparison experiments were performed with AR, autoregressive integrated moving average (ARIMA), MLR, SVR, and GRU models; the experimental results are shown in FIG. 3, and the evaluation indices of each model are listed in Table 1.
TABLE 1 Comparison of evaluation indices of the heat load prediction models

Model     RMSE (×10³)   MAE (×10³)   R²
AR        40.815        27.213       76.724%
ARIMA     33.892        19.028       83.951%
MLR       31.631        20.857       86.020%
SVR       29.220        18.662       88.070%
GRU       24.994        17.249       91.268%
LSTNet    15.833        12.341       96.501%
The experimental results show that, for hourly heat load prediction, the LSTNet model used here predicts better than the other models, with an R² closer to 1. Compared with the GRU model, the RMSE of the LSTNet model is reduced by 36.7% and the MAE by 28.5%, a marked improvement in model accuracy.

Claims (4)

1. A method for predicting the ultra-short-term heat load of a heat exchange station based on a long- and short-term time-series network, characterized by comprising the following steps:
S1: selecting meteorological data and heating data in a certain time period and constructing a data set as the input variable X_n;
S2: preprocessing the data, including identifying and correcting missing values and outliers, and standardizing the data;
S3: screening the input variables with the RF method and reducing the dimensionality of the data set to obtain X_m, then splitting the data set into a training set and a test set at a ratio of 8:2;
S4: inputting the training set into the LSTNet model one by one and training the weights and biases of the model to obtain a trained network model;
S5: inputting the test set into the trained LSTNet model one by one to obtain the predicted value ŷ_t.
2. The method for predicting the ultra-short-term heat load of a heat exchange station based on a long- and short-term time-series network according to claim 1, wherein the data preprocessing of step S2 comprises the following steps:
S201: missing values are compensated with the following formula:
x_i = 0.4x_{i-1} + 0.4x_{i+1} + 0.2x_{i+2}   (1)
where x_i is the current missing value, and x_{i-1}, x_{i+1}, and x_{i+2} are the values at the previous moment, the next moment, and the moment after next, respectively;
S202: outliers, that is, values exceeding three or more times the predetermined range, are treated as missing values;
S203: each data dimension is normalized with the formula:
y_i = (x_i − x̄) / s   (2)
where y_i is the normalized value, x_i is the original value, and x̄ and s are the mean and standard deviation of the raw data, respectively; the normalized data have a mean of 0, a variance of 1, and no dimension.
3. The method for predicting the ultra-short-term heat load of a heat exchange station based on a long- and short-term time-series network according to claim 1, wherein the dimensionality reduction of step S3 comprises:
S301: computing the out-of-bag error of each decision tree in the random forest;
the out-of-bag error on the corresponding out-of-bag (OOB) data is denoted errOOB1 and computed as follows:
errOOB1 = X / O   (3)
where the OOB data are fed into the tree as input, the classifier classifies the O samples, and X is the number of misclassified samples;
S302: adding noise interference to feature x and recomputing the out-of-bag error of each decision tree in the random forest, denoted errOOB2;
S303: computing the importance IMP of each feature with the formula:
IMP = (1/N) Σ_{i=1}^{N} (errOOB2_i − errOOB1_i)   (4)
where errOOB1 and errOOB2 are the out-of-bag errors before and after adding the noise, respectively, and N is the total number of decision trees in the random forest.
4. The method for predicting the ultra-short-term heat load of a heat exchange station based on a long- and short-term time-series network according to claim 1, wherein the LSTNet model of step S4 comprises the following specific steps:
S401: the first module of the network is the convolutional layer, which consists of a number of filters; the output of the i-th filter is:
h_i = ReLU(W_i * X + b_i)   (5)
where the output h_i is a vector; ReLU is the activation function, ReLU(x) = max(0, x); * denotes the convolution operation; and W_i and b_i are the weight matrix and bias, respectively;
S402: the second module comprises the recurrent layer and the recurrent-skip layer, which obtain long-term and longer-term feature information; the hidden-state outputs h_t^R and h_t^S of the unit at time t are, respectively:
r_t = σ(W_r x_t + U_r h_{t−1} + b_r)
z_t = σ(W_z x_t + U_z h_{t−1} + b_z)
c_t = ReLU(W_c x_t + r_t ⊙ (U_c h_{t−1}) + b_c)
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ c_t   (6)
r_t = σ(W_r x_t + U_r h_{t−p} + b_r)
z_t = σ(W_z x_t + U_z h_{t−p} + b_z)
c_t = ReLU(W_c x_t + r_t ⊙ (U_c h_{t−p}) + b_c)
h_t = (1 − z_t) ⊙ h_{t−p} + z_t ⊙ c_t   (7)
where z_t and r_t are the outputs of the update gate and reset gate of the GRU neuron; c_t is the intermediate (candidate) state output; σ is the sigmoid activation function; x_t is the input to this layer at time t; ⊙ denotes the element-wise product; p is the number of hidden units skipped, i.e., the period; and W, U, and b are the weight matrices and biases of each gate unit;
S403: to connect the two layers above, the model combines their outputs through a fully connected layer; the output of this layer at time t is:
y_t^D = W^R h_t^R + Σ_{i=0}^{p−1} W_i^S h_{t−i}^S + b   (8)
where W^R and W^S are the weights assigned to the recurrent layer and the recurrent-skip layer, respectively, and b is the bias;
S404: to capture changes in the input scale, an AR process is added to the model, whose output y_t^A at time t is:
y_t^A = Σ_{k=0}^{q^A−1} W_k^A y_{t−k} + b^A   (9)
where q^A is the size of the input window over the input matrix;
S405: the output of the neural network part and the output of the AR model are integrated to obtain the final predicted output ŷ_t of the model:
ŷ_t = y_t^D + y_t^A   (10)
where ŷ_t is the final predicted value of the model at time t;
S406: during model training, the mean square error (MSE) is used as the loss function:
L = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²   (11)
where n is the number of valid data points, and ŷ_i and y_i are the predicted and actual test values, respectively.
CN202110274414.3A 2021-03-14 2021-03-14 Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network Active CN113052214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110274414.3A CN113052214B (en) 2021-03-14 2021-03-14 Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110274414.3A CN113052214B (en) 2021-03-14 2021-03-14 Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network

Publications (2)

Publication Number Publication Date
CN113052214A true CN113052214A (en) 2021-06-29
CN113052214B CN113052214B (en) 2024-05-28

Family

ID=76512106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110274414.3A Active CN113052214B (en) 2021-03-14 2021-03-14 Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network

Country Status (1)

Country Link
CN (1) CN113052214B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256036A (en) * 2021-07-13 2021-08-13 国网浙江省电力有限公司 Power supply cost analysis and prediction method based on Prophet-LSTNet combined model
CN113821344A (en) * 2021-09-18 2021-12-21 中山大学 Cluster load prediction method and system based on machine learning
CN114912169A (en) * 2022-04-24 2022-08-16 浙江英集动力科技有限公司 Industrial building heat supply autonomous optimization regulation and control method based on multi-source information fusion
CN115860270A (en) * 2023-02-21 2023-03-28 保定博堃元信息科技有限公司 Network supply load prediction system and method based on LSTM neural network
CN118074112A (en) * 2024-02-21 2024-05-24 北京智芯微电子科技有限公司 Photovoltaic power prediction method based on similar day and long-short period time sequence network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414747A (en) * 2019-08-08 2019-11-05 东北大学秦皇岛分校 A kind of space-time shot and long term urban human method for predicting based on deep learning
CN110610232A (en) * 2019-09-11 2019-12-24 南通大学 Long-term and short-term traffic flow prediction model construction method based on deep learning
CN110619430A (en) * 2019-09-03 2019-12-27 大连理工大学 Space-time attention mechanism method for traffic prediction
CN111275169A (en) * 2020-01-17 2020-06-12 北京石油化工学院 Method for predicting building thermal load in short time
CN111309577A (en) * 2020-02-19 2020-06-19 北京工业大学 Spark-oriented batch processing application execution time prediction model construction method
AU2020101854A4 (en) * 2020-08-17 2020-09-24 China Communications Construction Co., Ltd. A method for predicting concrete durability based on data mining and artificial intelligence algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHUANG Jiayi; YANG Guohua; ZHENG Haofeng; WANG Yudong; HU Ruikun; DING Xu: "Ultra-short-term load forecasting with a parallel multi-model fusion hybrid neural network", Electric Power Construction (电力建设), no. 10, 1 October 2020 (2020-10-01) *
XUN Gangyi: "Rolling short-term load forecasting based on cluster analysis and random forest", Intelligent City (智能城市), no. 09, 14 May 2018 (2018-05-14) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256036A (en) * 2021-07-13 2021-08-13 国网浙江省电力有限公司 Power supply cost analysis and prediction method based on Prophet-LSTNet combined model
CN113256036B (en) * 2021-07-13 2021-10-12 国网浙江省电力有限公司 Power supply cost analysis and prediction method based on Prophet-LSTNet combined model
CN113821344A (en) * 2021-09-18 2021-12-21 中山大学 Cluster load prediction method and system based on machine learning
CN113821344B (en) * 2021-09-18 2024-04-05 中山大学 Cluster load prediction method and system based on machine learning
CN114912169A (en) * 2022-04-24 2022-08-16 浙江英集动力科技有限公司 Industrial building heat supply autonomous optimization regulation and control method based on multi-source information fusion
CN114912169B (en) * 2022-04-24 2024-05-31 浙江英集动力科技有限公司 Industrial building heat supply autonomous optimization regulation and control method based on multisource information fusion
CN115860270A (en) * 2023-02-21 2023-03-28 保定博堃元信息科技有限公司 Network supply load prediction system and method based on LSTM neural network
CN118074112A (en) * 2024-02-21 2024-05-24 北京智芯微电子科技有限公司 Photovoltaic power prediction method based on similar day and long-short period time sequence network

Also Published As

Publication number Publication date
CN113052214B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN113052214B (en) Heat exchange station ultra-short-term heat load prediction method based on long-short-term time sequence network
CN113053115B (en) Traffic prediction method based on multi-scale graph convolution network model
CN111899510B (en) Intelligent traffic system flow short-term prediction method and system based on divergent convolution and GAT
Zheng et al. An accurate GRU-based power time-series prediction approach with selective state updating and stochastic optimization
CN110909926A (en) TCN-LSTM-based solar photovoltaic power generation prediction method
CN112116147A (en) River water temperature prediction method based on LSTM deep learning
CN112990556A (en) User power consumption prediction method based on Prophet-LSTM model
CN109754113A (en) Load forecasting method based on dynamic time warping and long short-term memory
CN113554466B (en) Short-term electricity consumption prediction model construction method, prediction method and device
CN111178616B (en) Wind speed prediction method based on negative correlation learning and regularization extreme learning machine integration
CN111985719B (en) Power load prediction method based on improved long-term and short-term memory network
CN115034129B (en) NOx emission concentration soft measurement method for thermal power plant denitration device
CN115481788B (en) Phase change energy storage system load prediction method and system
CN114595861A (en) Medium- and long-term power load prediction method based on MSTL and LSTM models
CN114065653A (en) Construction method of power load prediction model and power load prediction method
CN115186803A (en) Data center computing power load demand combination prediction method and system considering PUE
CN116303786A (en) Block chain financial big data management system based on multidimensional data fusion algorithm
CN113762591B (en) Short-term electric quantity prediction method and system based on GRU and multi-kernel SVM adversarial learning
CN114819395A (en) Industry medium and long term load prediction method based on long and short term memory neural network and support vector regression combination model
CN114581141A (en) Short-term load prediction method based on feature selection and LSSVR
CN114357870A (en) Metering equipment operation performance prediction analysis method based on local weighted partial least squares
CN114254828B (en) Power load prediction method based on mixed convolution feature extractor and GRU
CN115600498A (en) Wind speed forecast correction method based on artificial neural network
Li et al. Short-term Load Forecasting of Long-short Term Memory Neural Network Based on Genetic Algorithm
He et al. Residential Load Forecasting Based on CNN-LSTM and Non-uniform Quantization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant