CN116090602A - Power load prediction method and system - Google Patents
- Publication number
- CN116090602A (application CN202211527627.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- load
- information
- neural network
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/003—Load forecast, e.g. methods or systems for forecasting future load demand
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a power load prediction method comprising the following steps: acquiring the data required for electricity load prediction; digitally encoding the date, holiday and meteorological data at the times corresponding to the real historical load data, and handling exceptions in them; computing, via mutual information, the correlation between each item of non-load data and the real historical load data, and selecting the strongly correlated items to form an original data set; dividing the data set into a training set, a validation set and a test set; extracting and outputting information from the real historical load data; weighting the extracted information with an attention mechanism to obtain an intermediate variable; converting the intermediate variable into the electricity load information to be predicted; inputting the normalized training-set and validation-set data into a recurrent neural network whose parameters are optimized and updated with the Adam optimizer and the MSE loss function; and saving the recurrent neural network model that meets the validation loss requirement and feeding the test-set data into it to predict the electricity load. A corresponding system is also disclosed.
Description
Technical Field
The invention relates to the technical field of electricity load prediction, in particular to an electricity load prediction method and system, and especially to an electricity load prediction method and system based on a multi-layer extended (dilated) recurrent neural network and an attention mechanism.
Background
The power industry is a pillar of national economic development. In recent years the structure of the power generation industry has been gradually adjusted and improved: the share of new energy sources in the energy consumption structure has steadily increased, and the focus of large power producers has gradually shifted from the supply side to the demand side. Load prediction is classified as long-term, medium-term, short-term or ultra-short-term according to the prediction horizon. Short-term load prediction works in units of days and weeks, and its accuracy has notable significance for adjusting the output of power generation enterprises, for grid dispatch planning, and for users' electricity optimization. Load data are sequential and nonlinear, and research on power load prediction can be roughly divided into methods based on time series analysis, machine learning and neural networks.
Methods based on time series analysis mainly include exponential smoothing, the autoregressive integrated moving average (ARIMA) model, and the like. These methods apply mathematical statistics and random process theory to identify the statistical regularities followed by a random data sequence and thereby predict the system's future trend. However, they achieve good prediction accuracy only for load sequences with relatively small fluctuation, are suited only to predicting a single load variable, and cannot be applied well when other relevant factors such as weather and date are present. To account for the influence of different factors on load data, machine learning methods such as support vector machines (SVM), random forests and XGBoost are also widely used for multivariate load prediction. These models can automatically learn higher-level features in the sequence from the data as well as the nonlinear relationships between other factors and the load data. However, they depend excessively on the selection of similar samples during model construction, so building and updating the models is not flexible enough, and their prediction accuracy cannot meet the requirements of a power system.
Because neural networks possess an associative memory analogous to the human brain, parallel distributed information processing, self-learning, and the ability to approximate continuous functions, they can capture the various changes of the power load and, in particular, readily handle the nonlinear relationship between load data and other data, giving them good practicality. The simplest neural network approach is the multi-layer perceptron (MLP), composed of fully connected neurons. Load and related data are propagated forward through the MLP, and its parameters are optimized by error backpropagation (BP) so that the output gradually approaches the future real load. Using a big-data platform to cluster different load classes, Zheng et al. proposed a multiple BP neural network that alleviates the time consumption and overfitting problems under massive data and improves prediction accuracy. Qiao et al. proposed a method (SAPOS-BP) that uses simulated annealing ideas to improve particle swarm optimization of the weights in a BP neural network for load prediction, greatly improving the network's generalization and self-learning capability. Combining wavelet decomposition with neural networks, Rana and Koprinska proposed an advanced neural network time-series method for short-term load prediction.
Recurrent neural networks (RNN) have good temporal-information extraction and nonlinear learning capability and can simultaneously learn the influence of other factors (such as weather and date) on load data, making them a common choice for time-series problems. Long short-term memory (LSTM) networks overcome the vanishing and exploding gradient problems of traditional RNNs, making them favored by load prediction researchers. LSTM has been used to predict load 24 hours and 1 hour ahead on two load datasets, exhibiting good predictive power and applicability. The gated recurrent unit (GRU) is a variant of LSTM with a simpler architecture and is also a commonly used sequence-processing network. Under the same conditions, Mahjoub et al. compared load prediction with an LSTM network, a GRU network and a dropout-equipped GRU network (for overfitting prevention), and experiments showed that LSTM had the best prediction effect. The bidirectional LSTM (BiLSTM) extends the traditional unidirectional LSTM by learning from both ends of the sequence simultaneously. Siami et al. demonstrated the usability of this model for load prediction, with predictive capability superior to unidirectional LSTM. In recent years, methods combining convolutional and recurrent neural networks have gained attention: a CNN extracts the spatial information in a sequence and an LSTM extracts its temporal information, which extends the length of learnable information and yields higher prediction capability than a single network. A combination of a deep residual CNN and stacked LSTM has been used for building electrical load prediction.
Wang et al. combined CNN and BiLSTM for power load prediction. Both of these hybrid methods exhibit performance superior to a single LSTM network.
Attention mechanisms have become a hotspot in speech recognition, image recognition, machine translation and related fields in recent years, and their ability to capture key information has made them a research direction for improving load prediction accuracy. Jung et al. constructed an attention-based multi-layer GRU model for one-hour-ahead building load prediction, and the results showed better accuracy than the basic GRU method. Wu et al. combined the attention mechanism with a CNN model to extract spatial features of the data and used LSTM and BiLSTM to extract temporal information, effectively predicting the power load in an integrated energy system. The sequence-to-sequence (Seq2Seq) architecture, which originates in natural language processing (NLP), has been applied to short-term load prediction in recent work. It is divided into an encoder and a decoder: the encoder extracts information from the input sequence and compresses it into a fixed intermediate variable, and the decoder decodes this intermediate variable, rich in historical load information, into the expected predicted values. Its input and output sequence lengths are both variable and may differ. Du et al. devised an end-to-end architecture based on an attention mechanism and a sequence-to-sequence model specifically for multivariate time-series prediction and validated it on five multivariate time-series datasets. Sehovac et al. compared load prediction under the Seq2Seq framework with different RNN cells (vanilla RNN, LSTM, GRU) and different attention mechanisms (Bahdanau attention (BA), Luong attention (LA)), showing that Seq2Seq with BA has the best overall prediction effect.
In an LSTM network the memory gate is updated to retain the information of the sequence preceding the current cell, but this memory is short-term: as the time series grows longer, the memory in the neurons is gradually overwritten, and beyond a certain length the initially memorized information is weakened or forgotten. To address this drawback, a hybrid model fusing an extended (dilated) RNN, an attention mechanism and an LSTM network is proposed on the basis of the Seq2Seq framework to predict load from long sequences. The model uses a multi-layer extended RNN structure as the Encoder layer to learn nonlinear, multidimensional dependencies from long load sequences. The attention mechanism weights the Encoder outputs to distinguish their importance and reduce the impact of redundant information on the subsequent decoding process. Finally, a single-layer LSTM network decodes, and a fully connected layer outputs the load prediction result.
Disclosure of Invention
The invention aims to provide an electricity load prediction method that predicts future short-term load based on artificial intelligence technology.
In one aspect, the present invention provides a method for predicting an electrical load, including:
S1, acquiring the data required for electricity load prediction, including real historical load data of a certain region together with the date, holiday and meteorological data at the corresponding times; the real historical load data comprise the dates in the electricity consumption database of electricity users and the power data recorded at multiple points per day (power) over a period of time;
S2, digitally encoding the date, holiday and meteorological data at the times corresponding to the real historical load data, and handling exceptions in them; the date features comprise: year, month, week and day; the holiday feature comprises: holiday; the meteorological data features comprise: topTem (daily maximum temperature), lowTem (daily minimum temperature), avgTem (average temperature), rain (rainfall) and hum (humidity);
S3, obtaining the correlation between each item of non-load data and the real historical load data via mutual information calculation, ranking the correlations by magnitude, and selecting the strongly correlated non-load data to combine with the real historical load data into an original data set;
S4, performing data reduction and normalization on the original data and dividing the original data set into three parts: a training set, a validation set and a test set;
S5, establishing a Seq2Seq model comprising an Encoder layer and a Decoder layer, and extracting real historical load data information through the Encoder layer in the Seq2Seq model;
S6, weighting the extracted information output by the Encoder layer with an attention mechanism to obtain an intermediate variable containing rich historical information;
S7, converting the information of the intermediate variable into the electricity load information to be predicted using a Decoder layer consisting of a single-layer recurrent neural network and a linear layer;
S8, inputting the selected normalized training-set and validation-set data into the recurrent neural network and optimizing and updating its parameters based on the Adam optimizer and the MSE loss function; optimization and updating stop based on the model's validation loss;
S9, saving the recurrent neural network model that meets the validation loss requirement, and inputting the test-set data into that model to obtain the predicted electricity load.
Preferably, the step S2 of digitally encoding the date, holiday and meteorological data at the times corresponding to the real historical load data and handling exceptions comprises:
S21, digital encoding: replacing the non-numeric original data with numeric codes, the specific formula being shown in (1);
S22, exception handling, comprising:
(1) Because the real historical load data are continuous and loads at adjacent times do not differ greatly, a sudden change in the load value detected by formula (2) is regarded as an outlier:
where p_i is the load value at a given time point and ζ is a threshold between 0 and 1; when formula (2) is satisfied, the outlier is deleted for subsequent correction;
(2) When the real historical load data and other related data contain missing values, they are corrected in advance; if the missing value is a single point, it is corrected by averaging its neighbours, as in formula (3):
p_i = (p_{i+1} + p_{i-1}) / 2    (3);
when load data are missing continuously at the same time of day over several days, the missing value is replaced by a weighted average of the values at the same time over the previous d days, as in formula (4):
where w_k is the weight coefficient for the k-th of the previous d days.
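A minimal stdlib sketch of the exception handling in S21-S22 follows. Formula (2) is not reproduced above, so the relative-jump test and the names (`mark_outliers`, `zeta`, `fill_long_gap`) are illustrative assumptions rather than the patent's exact definitions:

```python
def mark_outliers(loads, zeta=0.3):
    """Assumed reading of formula (2): flag points whose relative jump
    from the previous point exceeds the threshold zeta in (0, 1)."""
    flagged = [False] * len(loads)
    for i in range(1, len(loads)):
        if loads[i - 1] != 0 and abs(loads[i] - loads[i - 1]) / abs(loads[i - 1]) > zeta:
            flagged[i] = True
    return flagged

def fill_single_gap(loads, i):
    """Formula (3): replace a single missing value by the neighbour mean."""
    return (loads[i + 1] + loads[i - 1]) / 2

def fill_long_gap(history, weights):
    """Formula (4) as described: weighted average of the values at the
    same time slot on the previous d days (weights w_k)."""
    return sum(w * p for w, p in zip(weights, history)) / sum(weights)
```

Deleted outliers would then be refilled with the same gap-filling helpers.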
Preferably, the step S3 of obtaining the correlations between the non-load data and the real historical load data via mutual information calculation, ranking them by magnitude, and selecting the strongly correlated non-load data to combine with the real historical load data into the original data set comprises:
S31: calculating the effect of the different date features and meteorological features on the change of the electric load, including computing the marginal probability densities p(X) and p(Y) of the electric load data X and one of the feature variables Y, and their joint probability density function p(X, Y), over the selected period;
S32: from the marginal probability densities p(X) and p(Y) and the joint probability density function p(X, Y), calculating the mutual information I(X, Y) between the electric load data X and the feature variable Y, as in formula (5):
I(X, Y) = Σ_x Σ_y p(x, y) log[ p(x, y) / (p(x) p(y)) ]    (5);
S33: obtaining the correlation between each item of non-load data and the real historical load data from the mutual information values and ranking the correlations by magnitude;
S34: selecting the strongly correlated non-load data as the relevant feature variables and combining them with the real historical load data into the original data set.
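Formula (5) can be estimated from data with a simple histogram approach; this stdlib sketch (the bin count and the `rank_features` helper are illustrative assumptions) shows the S31-S34 ranking idea:

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Histogram estimate of I(X; Y) in formula (5): discretize both
    variables, then sum p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    def discretize(vals):
        lo, hi = min(vals), max(vals)
        width = (hi - lo) / bins or 1.0  # constant column -> single bin
        return [min(int((v - lo) / width), bins - 1) for v in vals]

    dx, dy = discretize(xs), discretize(ys)
    n = len(xs)
    pxy = Counter(zip(dx, dy))           # joint counts
    px, py = Counter(dx), Counter(dy)    # marginal counts
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def rank_features(load, features):
    """S33-S34: rank candidate feature columns by mutual information with the load."""
    return sorted(features, key=lambda name: mutual_information(load, features[name]),
                  reverse=True)
```

A feature identical to the load attains the maximum score, while a constant feature scores zero, which is the ordering the selection step relies on.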
Preferably, the step S4 of performing data reduction and normalization on the original data in the original data set and dividing it into three parts (a training set, a validation set and a test set) comprises:
S41: normalizing all digitally encoded data in the original data set into [0, 1] by the MinMaxScale method to reduce the complexity of network training, the normalization formula being (6):
n_i' = (n_i − n_i,min) / (n_i,max − n_i,min)    (6);
where n_i denotes the i-th feature, n_i' the normalized data, and n_i,max and n_i,min the maximum and minimum of the i-th feature, respectively;
S42: dividing the normalized data into training, validation and test sets in the ratio 60%, 20% and 20%;
S43: preprocessing the data in the training, validation and test sets into the sequence-label structure required by the network training model.
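The MinMaxScale normalization of formula (6) and the 60/20/20 split of S42 can be sketched as follows; the chronological (unshuffled) split is an assumption, since the text does not specify the ordering:

```python
def min_max_scale(column):
    """Formula (6): scale one feature column into [0, 1]."""
    lo, hi = min(column), max(column)
    span = (hi - lo) or 1.0  # guard against constant columns
    return [(v - lo) / span for v in column]

def split_dataset(rows, train=0.6, val=0.2):
    """S42: chronological 60/20/20 split into training/validation/test sets."""
    n = len(rows)
    a, b = int(n * train), int(n * (train + val))
    return rows[:a], rows[a:b], rows[b:]
```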
Preferably, the step S5 of establishing a Seq2Seq model comprising an Encoder layer and a Decoder layer and extracting real historical load data information through the Encoder layer in the Seq2Seq model comprises:
S51: establishing the Seq2Seq model and forming a single-layer extended LSTM structure based on the LSTM network, which uses the data input at time t together with the hidden state h_{t−d} and cell state c_{t−d} of the node at time t−d as the cell input of the current LSTM structure; the outputs h_t and c_t of the LSTM structure are given by formulas (7)-(12):
f_t = σ(W_i · [h_{t−d}, x_t] + b_i)    (7);
u_t = σ(W_j · [h_{t−d}, x_t] + b_j)    (8);
v_t = tanh(W_z · [h_{t−d}, x_t] + b_z)    (9);
c_t = f_t * c_{t−d} + u_t * v_t    (10);
y_t = σ(W_o · [h_{t−d}, x_t] + b_o)    (11);
h_t = y_t * tanh(c_t)    (12);
where σ is the sigmoid activation function and tanh(·) the hyperbolic tangent activation function; W and b are the weight parameters and biases of the model, respectively; y_t in (11) is the output-gate activation used to produce h_t in (12);
S52: expanding several consecutive LSTM structures into the Encoder layer to extract information from the real historical load data, the extracted information comprising the Encoder layer outputs h_i and the final hidden state S;
S53: outputting the extracted information through the Encoder layer.
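Equations (7)-(12) can be illustrated with a scalar toy cell; real layers use weight matrices and state vectors, so the dict-based parameters and two-element weight lists here are purely illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dilated_lstm_cell(x_t, h_prev, c_prev, W, b):
    """Scalar sketch of formulas (7)-(12). h_prev and c_prev are the states
    of the node d steps back (the dilation), not the immediately preceding
    step. W and b hold parameters for the four gates keyed "i", "j", "z", "o"."""
    concat = [h_prev, x_t]  # [h_{t-d}, x_t]
    def affine(name):
        return sum(w * v for w, v in zip(W[name], concat)) + b[name]
    f_t = sigmoid(affine("i"))        # forget gate, eq. (7)
    u_t = sigmoid(affine("j"))        # update (input) gate, eq. (8)
    v_t = math.tanh(affine("z"))      # candidate state, eq. (9)
    c_t = f_t * c_prev + u_t * v_t    # cell state, eq. (10)
    y_t = sigmoid(affine("o"))        # output gate, eq. (11)
    h_t = y_t * math.tanh(c_t)        # hidden state, eq. (12)
    return h_t, c_t
```

With all parameters zero, every gate evaluates to 0.5 and the candidate to 0, so the cell simply halves the carried-over state.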
Preferably, the step S6 of weighting the extracted information output by the Encoder layer based on the attention mechanism to obtain an intermediate variable containing rich historical information comprises:
S61: inputting the extracted information output in S53 into the attention mechanism, which extracts the most probable key information from the Encoder layer output and compresses it into the intermediate variable c_0, as shown in formulas (13)-(15):
W_i = h_i * S    (13);
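Since formulas (14) and (15) are not reproduced above, the following sketch completes formula (13) with a standard softmax normalization and weighted sum, which is one conventional (assumed) reading of this attention step:

```python
import math

def attention_context(encoder_outputs, s):
    """S61 sketch: score each Encoder output h_i against the final hidden
    state S by dot product (formula (13)); the softmax and weighted sum
    below stand in for the unreproduced formulas (14)-(15)."""
    scores = [sum(hi * si for hi, si in zip(h, s)) for h in encoder_outputs]
    m = max(scores)                                   # stabilized softmax
    exps = [math.exp(sc - m) for sc in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]                # assumed formula (14)
    dim = len(encoder_outputs[0])
    c0 = [sum(a * h[k] for a, h in zip(alphas, encoder_outputs))
          for k in range(dim)]                        # assumed formula (15)
    return alphas, c0
```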
preferably, the step S7 of converting the information of the intermediate variable into the electricity load information to be predicted using a Decoder layer based on a cyclic neural network of one layer and a linear layer includes:
output c in S61 0 Inputting a first hidden state of an LSTM layer as a Decoder layer into a network, and inputting an original input sequence into the Decoder layer; setting the predicted output sequence of the linear layer as
Preferably, the step S8 of inputting the selected normalized training-set and validation-set data into the recurrent neural network based on the Adam optimizer and the MSE loss function, optimizing and updating the parameters in the recurrent neural network, and stopping optimization based on the model's validation loss, comprises:
S81: calculating the loss between the predicted output sequence of the linear layer and the tag sequence y_t corresponding to the data, with the mean square error (MSE) as the loss function:
MSE = (1/N) Σ_t (ŷ_t − y_t)²;
S82: selecting a learning rate lr = 0.005 and judging whether the validation loss meets the specified constraint τ: if MSE < τ, training is finished; if MSE > τ, parameter optimization continues, and the weight parameters W and biases b in the network are updated using the Adam gradient optimizer.
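The S81-S82 loop can be sketched as follows; the Adam update itself is abstracted into a caller-supplied `step_fn` (a hypothetical stand-in), since only the loss function and the stopping criterion are specified here:

```python
def mse(preds, targets):
    """S81: mean square error between predicted and true load sequences."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def train_until_constraint(step_fn, validate_fn, tau, max_epochs=1000):
    """S82 sketch: keep optimizing (step_fn stands in for one Adam update
    pass) until the validation MSE drops below the constraint tau."""
    for epoch in range(max_epochs):
        step_fn()                      # update W and b
        if validate_fn() < tau:        # MSE < tau: training finished
            return epoch
    return None                        # constraint never met within budget
```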
Preferably, the step S9 of saving the recurrent neural network model that meets the validation loss requirement and inputting the test-set data into it to obtain the predicted electricity load comprises:
S91: once the final loss of the model meets the constraint τ in the training stage, confirming that the whole model's prediction of future data meets the requirement;
S92: inputting the test-set data into the recurrent neural network model that meets the validation loss requirement to predict the electricity load, and further optimizing that model according to the final result.
A second aspect of the present invention provides an electrical load prediction system comprising:
the data acquisition module is used for acquiring data required by electricity consumption load prediction, wherein the data comprise real historical load data of a certain region, date of corresponding time of the real historical load data, holidays and meteorological data, and the real historical load data comprise date in an electricity consumption database of an electricity consumption user and daily multipoint electric power data information (power) in a period of time;
the data processing module is used for carrying out digital coding and exception processing on the date, holiday and meteorological data of the time corresponding to the real historical load data;
the data set module is used for obtaining correlations between a plurality of non-load data and real historical load data based on a mutual information calculation mode, sequencing the correlations according to the size, and selecting corresponding non-load data with strong correlations to be combined with the real historical load data to form an original data set;
the data set dividing module is used for carrying out data specification and normalization on the original data in the original data set and dividing the original data set into three parts, namely a training set, a verification set and a test set;
the information extraction module is used for establishing a Seq2Seq model and extracting real historical load data information through an Encoder layer in the Seq2Seq model;
And the intermediate variable module is used for carrying out weight calculation on the extracted information output by the Encoder layer based on the attention mechanism so as to obtain intermediate variables containing rich historical information.
The information conversion module is used for converting the information of the intermediate variable into the power load information to be predicted by using a Decoder layer based on a cyclic neural network of one layer and a linear layer;
the neural network training module is used for inputting the selected normalized data of the training set and the data of the verification set into the cyclic neural network based on an optimization method (Adam) and a loss function MSE, and optimizing and updating parameters in the cyclic neural network; the basis of the optimization and updating stop is the model load verification loss rate of the cyclic neural network;
and the prediction module is used for storing the recurrent neural network model meeting the verification loss rate, and inputting the data in the test set into that model, thereby obtaining the predicted power load.
A third aspect of the invention provides an electronic device comprising a processor and a memory, the memory storing a plurality of instructions, the processor being for reading the instructions and performing the method according to the first aspect.
A fourth aspect of the invention provides a computer readable storage medium storing a plurality of instructions readable by a processor and for performing the method of the first aspect.
The method, the device, the electronic equipment and the computer readable storage medium provided by the invention have the following beneficial technical effects:
the electric load is predicted based on a multilayer extended recurrent neural network and an attention mechanism, so that future short-term load is predicted using artificial intelligence technology, and the prediction accuracy and speed are greatly improved compared with traditional methods.
Drawings
FIG. 1 is a flow chart of a method for predicting electrical loads in accordance with a preferred embodiment of the present invention;
FIG. 2 is a diagram of an electrical load system architecture according to a preferred embodiment of the present invention;
fig. 3 is a schematic structural diagram of an embodiment of an electronic device according to the present invention.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
Example 1
Referring to fig. 1, a method for predicting an electrical load includes:
S1, obtaining data required by electricity consumption load prediction, wherein the data comprise real historical load data of a certain area, date of corresponding time of the real historical load data, holidays and meteorological data, and the real historical load data comprise date in an electricity consumption database of an electricity user and daily multipoint electric power data information (power) in a period of time; the multipoint power data information is 96 points in the embodiment;
s2, carrying out digital coding and exception processing on date, holiday and meteorological data of the corresponding time of the real historical load data; wherein the date feature comprises: year, month, week, day; the holiday features include: holiday (holiday); the characteristics of the meteorological data include: topTem (day highest air temperature), lowTem (day lowest air temperature), avgTem (average air temperature), rain (rainfall) and hum (humidity);
s3, obtaining correlations between a plurality of non-load data and real historical load data based on a mutual information calculation mode, sorting the correlations according to the size, and selecting corresponding non-load data with strong correlations to be combined with the real historical load data to form an original data set;
S4, carrying out data specification and normalization on the original data in the original data set, and dividing the original data set into three parts: training set (60%), validation set (20%) and test set (20%);
s5, a Seq2Seq model comprising an Encoder layer and a Decoder layer is established, and real historical load data information is extracted through the Encoder layer in the Seq2Seq model;
and S6, carrying out weight calculation on the extracted information output by the Encoder layer based on an attention mechanism, thereby obtaining intermediate variables containing rich historical information.
S7, converting the information of the intermediate variable into the power load information to be predicted based on a cyclic neural network of one layer and a Decoder layer of one linear layer;
s8, based on an optimization method (Adam) and a loss function MSE, inputting selected normalized data of the training set and data of the verification set into the cyclic neural network, and optimizing and updating parameters in the cyclic neural network; the basis of the optimization and updating stop is the model load verification loss rate of the cyclic neural network;
S9, storing the recurrent neural network model meeting the verification loss rate, and inputting the data in the test set into the recurrent neural network model meeting the verification loss rate, so as to obtain the predicted electricity load.
As a preferred embodiment, the step S2 of digitally encoding the date, holiday and weather data of the time corresponding to the real historical load data and exception handling includes the steps of:
S21: non-digital original data are replaced with digital codes; in this embodiment, 1-12 represent the months of the year, 0-31 the days of each month, and 1-7 the days of each week; a holiday is represented by 1 and a non-holiday by 0, as shown in formula (1):
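The digital coding of S21 can be sketched as follows. This is a minimal illustration, not the patent's formula (1): the function name `encode_date` and the `HOLIDAYS` set are our own placeholders, and a real system would load the holiday calendar from data.

```python
from datetime import date

# Hypothetical holiday set for illustration only.
HOLIDAYS = {date(2022, 10, 1), date(2022, 10, 2)}

def encode_date(d: date) -> dict:
    """Replace non-numeric calendar fields with the digital codes of S21:
    month 1-12, day of month, weekday 1-7, holiday flag 1/0."""
    return {
        "year": d.year,
        "month": d.month,          # 1-12
        "day": d.day,              # day of the month
        "week": d.isoweekday(),    # 1 (Monday) - 7 (Sunday)
        "holiday": 1 if d in HOLIDAYS else 0,
    }
```

The resulting dictionary would then be appended to the meteorological features (topTem, lowTem, avgTem, rain, hum) to form one feature row per day.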
s22, the exception handling includes:
(1) Based on the characteristic that the real historical load data are continuous and loads at adjacent times do not differ greatly, a load value whose sudden change satisfies the judgment of formula (2) is regarded as an abnormal value:
where p_i is the load value at a certain time point and ζ is a threshold between 0 and 1; when formula (2) is satisfied, the abnormal value is deleted for subsequent correction;
(2) When the real historical load data and other related data have the condition of data loss, carrying out correction in advance; if the current missing value is single, correcting by adopting a method of close-proximity averaging, as shown in a formula (3):
p_i = (p_{i+1} + p_{i−1}) / 2   (3);
When load data are continuously missing at the same time over several days, the missing value is replaced by a weighted average of the values at the same time in the previous d days, as shown in formula (4):
where w_k is the weight coefficient of the k-th previous day; other data are discrete but may also be missing, and are supplemented by the same previous-d-days replacement method.
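The two correction rules above might be sketched as follows. The function names and the example weights are ours; the patent's weight coefficients w_k of formula (4) are left as a parameter.

```python
import numpy as np

def fill_single_gap(p, i):
    """Formula (3): replace a single missing point p_i by the mean of its
    two immediate neighbours."""
    return (p[i + 1] + p[i - 1]) / 2

def fill_daily_gap(history, weights):
    """Formula (4)-style fill: weighted average of the same time slot over
    the previous d days. `history` holds the d previous same-time values
    (most recent first); `weights` are the w_k coefficients, assumed to sum
    to 1."""
    return float(np.dot(weights, history))
```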
As a preferred embodiment, the step S3 of obtaining the correlations between the plurality of non-load data and the real historical load data based on the mutual information calculation mode, sorting the correlations by size, and selecting the corresponding strongly correlated non-load data to combine with the real historical load data into the original data set comprises:
S31: the effect of the different date features and the meteorological data features on the change of the electric load is calculated, including calculating, within a selected time, the marginal probability densities p(X) and p(Y) of the electric load data X and one of the characteristic variables Y, and their joint probability density function p(X, Y);
S32: based on the marginal probability densities p(X) and p(Y) and the joint probability density function p(X, Y), the mutual information I(X, Y) between the electric load data X and the characteristic variable Y is calculated as shown in formula (5):

I(X, Y) = Σ_x Σ_y p(x, y) · log( p(x, y) / (p(x) · p(y)) )   (5);
S33: obtaining correlations between a plurality of non-load data and real historical load data through the calculation result of the mutual information, and sequencing the correlations according to the size;
s34, selecting corresponding non-load data with strong correlation as a relevant characteristic variable, and combining the non-load data with the actual historical load data to form an original data set.
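A possible sketch of the mutual-information ranking of S31-S34, assuming a simple histogram estimate of the probability densities in formula (5); the bin count and function names are our choices, not the patent's:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) (formula (5)) from samples by discretising both
    variables and summing p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                      # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def rank_features(load, features):
    """Sort candidate non-load features by their mutual information with
    the load series (S33); the strongest features are kept (S34)."""
    scores = {name: mutual_information(load, f) for name, f in features.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A strongly load-related feature (e.g. temperature) should rank above an unrelated one.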
As a preferred embodiment, the step S4 of performing data specification and normalization on the original data in the original data set and dividing the original data set into a training set, a verification set and a test set comprises:
S41: all digitally encoded data in the raw dataset are normalized into [0, 1] based on the MinMaxScale method to reduce the complexity of network training; the normalization formula is formula (6):

n_ii = (n_i − n_imin) / (n_imax − n_imin)   (6);

where n_i denotes the i-th feature, n_ii the normalized data, and n_imax and n_imin the maximum and minimum values of the i-th feature;
S42: the normalized data are divided into a training set, a verification set and a test set in the quantity ratio 60% : 20% : 20%;
S43: the data in the training set, the verification set and the test set are preprocessed into the sequence (seq) and label (label) structure required by the network training model.
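S41-S43 might be sketched as follows. The 60/20/20 split and the [0, 1] scaling follow the text; the window length and helper names are illustrative assumptions.

```python
import numpy as np

def minmax(x):
    """Formula (6): scale each feature column into [0, 1]."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

def split_and_window(series, seq_len):
    """60/20/20 split (S42), then build (sequence, label) pairs (S43):
    each sample is seq_len consecutive points, labelled with the next point."""
    n = len(series)
    train, val, test = np.split(series, [int(0.6 * n), int(0.8 * n)])
    def window(part):
        xs = [part[i:i + seq_len] for i in range(len(part) - seq_len)]
        ys = [part[i + seq_len] for i in range(len(part) - seq_len)]
        return np.array(xs), np.array(ys)
    return window(train), window(val), window(test)
```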
As a preferred embodiment, the step 5 establishes a Seq2Seq model including an Encoder layer and a Decoder layer, and extracts the real historical load data information through the Encoder layer in the Seq2Seq model, which includes:
S51: a Seq2Seq model is established, and a single-layer extended LSTM structure is formed based on the LSTM network; the data x_t input at time t and the hidden state h_{t−d} and cell state c_{t−d} of the t−d node are used as the cell input of the current LSTM structure; the outputs h_t and c_t of the LSTM structure are given by formulas (7)-(12):
f_t = σ(W_i · [h_{t−d}, x_t] + b_i)   (7);
u_t = σ(W_j · [h_{t−d}, x_t] + b_j)   (8);
v_t = tanh(W_z · [h_{t−d}, x_t] + b_z)   (9);
c_t = f_t * c_{t−d} + u_t * v_t   (10);
y_t = σ(W_o · [h_{t−d}, x_t] + b_o)   (11);
h_t = y_t * tanh(c_t)   (12);
where σ is the sigmoid activation function and tanh(·) is the hyperbolic tangent activation function; W and b are the weight parameters and biases of the model, respectively; y_t is the output-gate activation at time t;
S52: a plurality of continuous LSTM structures are expanded as the Encoder layer to extract the real historical load data information; the extracted information comprises the outputs h_i of the Encoder layer and the last hidden state S.
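One reading of the extended LSTM cell of formulas (7)-(12), taking the hidden and cell states of the t−d node as inputs. Gate names follow the patent's f_t, u_t, v_t, y_t; the dictionary layout of the weights is our own assumption.

```python
import numpy as np

def sigma(z):
    """Sigmoid activation used for the gates."""
    return 1.0 / (1.0 + np.exp(-z))

def extended_lstm_cell(x_t, h_prev, c_prev, W, b):
    """One step of the single-layer extended LSTM: h_prev and c_prev are the
    hidden/cell states of the t-d node; W maps gate name -> weight matrix
    over the concatenation [h_prev; x_t], b maps gate name -> bias."""
    z = np.concatenate([h_prev, x_t])
    f = sigma(W["i"] @ z + b["i"])        # (7)  forget gate f_t
    u = sigma(W["j"] @ z + b["j"])        # (8)  update gate u_t
    v = np.tanh(W["z"] @ z + b["z"])      # (9)  candidate state v_t
    c = f * c_prev + u * v                # (10) cell state c_t
    y = sigma(W["o"] @ z + b["o"])        # (11) output gate y_t
    h = y * np.tanh(c)                    # (12) hidden state h_t
    return h, c
```

With all-zero weights and biases, every gate evaluates to 0.5 and the cell state halves, which gives a simple sanity check of the equations.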
as a preferred embodiment, the step S6 of calculating weights of the extracted information output by the Encoder layer based on the attention mechanism, thereby obtaining intermediate variables containing rich history information, includes:
S61: the extracted information output in S52 is input into the Attention layer of the attention mechanism; the most likely key information in the Encoder layer output is extracted and compressed into the intermediate variable c_0, as shown in formulas (13)-(15):

W_i = h_i * S   (13);
c_0 takes all inputs into account, but pays more attention to important input moments and less attention to others; this is the attention mechanism.
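The attention step of S61 can be sketched as follows, assuming formula (13)'s score is a dot product of each encoder output with the last hidden state S, and that the unreproduced formulas (14)-(15) are a softmax normalization followed by a weighted sum; the latter two are our reading, not the patent's text.

```python
import numpy as np

def attention_context(H, s):
    """Attention over encoder outputs: H has one encoder output h_i per row,
    s is the last hidden state S. Returns the intermediate variable c_0."""
    scores = H @ s                         # (13): W_i = h_i * S
    a = np.exp(scores - scores.max())      # assumed (14): softmax weights
    a = a / a.sum()
    return a @ H                           # assumed (15): c_0 = sum_i a_i h_i
```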
As a preferred embodiment, the step S7 includes the steps of:
the intermediate variable c_0 output in S61 is input into the network as the first hidden state of the LSTM layer of the Decoder layer, and the original input sequence is input into the Decoder layer; the predicted output sequence of the linear layer is denoted ŷ_t.
As a preferred embodiment, the step S8 includes the steps of:
S81: the predicted output sequence ŷ_t of the linear layer in the neural network is compared with the tag sequence y_t of the corresponding data, and the loss rate is calculated using the mean square error MSE as the loss function:

MSE = (1/n) · Σ_{t=1..n} (ŷ_t − y_t)²;

S82: the learning rate is set to lr = 0.005, and it is judged whether the loss rate on the verification set meets the specified constraint τ; if MSE < τ, training is finished, and if MSE > τ, parameter optimization is performed: the weight parameters W and biases b in the network are updated using the gradient optimization function Adam.
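The loss computation and stopping rule of S81-S82 might look like this. Adam is shown as a single generic update step on one weight tensor; the backpropagated gradient itself is omitted, and the helper names are ours.

```python
import numpy as np

def mse(pred, target):
    """S81: mean squared error between the predicted sequence and the labels."""
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

def adam_step(w, grad, m, v, t, lr=0.005, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update of a weight tensor w given its gradient; m and v are
    the running first and second moment estimates, t the step count."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def should_stop(val_loss, tau):
    """S82: stop training once the verification-set MSE meets the constraint tau."""
    return val_loss < tau
```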
As a preferred embodiment, the step S9 includes the steps of:
S91: after the final loss value of the model meets the constraint τ in the training stage, it is confirmed that the model's prediction of future data meets the requirement;
S92: the test set data are input into the recurrent neural network model meeting the verification loss rate to predict the electricity load, and the model is further optimized according to the final result.
Example two
An electrical load prediction system, comprising:
the data acquisition module is used for acquiring data required by electricity consumption load prediction, wherein the data comprise real historical load data of a certain region, date of corresponding time of the real historical load data, holidays and meteorological data, and the real historical load data comprise date in an electricity consumption database of an electricity consumption user and daily multipoint electric power data information (power) in a period of time; the multipoint power data information is 96 points in the embodiment;
the data processing module is used for carrying out digital coding and exception processing on the date, holiday and meteorological data of the time corresponding to the real historical load data; wherein the date feature comprises: year, month, week, day; the holiday features include: holiday (holiday); the characteristics of the meteorological data include: topTem (day highest air temperature), lowTem (day lowest air temperature), avgTem (average air temperature), rain (rainfall) and hum (humidity);
The data set module is used for obtaining correlations between a plurality of non-load data and real historical load data based on a mutual information calculation mode, sequencing the correlations according to the size, and selecting corresponding non-load data with strong correlations to be combined with the real historical load data to form an original data set;
the data set dividing module is used for carrying out data specification and normalization on the original data in the original data set and dividing the original data set into three parts: training set (60%), validation set (20%) and test set (20%);
the information extraction module is used for establishing a Seq2Seq model and extracting real historical load data information through an Encoder layer in the Seq2Seq model;
and the intermediate variable module is used for carrying out weight calculation on the extracted information output by the Encoder layer based on the attention mechanism so as to obtain intermediate variables containing rich historical information.
The information conversion module is used for converting the information of the intermediate variable into the power load information to be predicted based on a cyclic neural network of one layer and a linear layer serving as a Decoder layer;
the neural network training module is used for inputting the selected normalized data of the training set and the data of the verification set into the cyclic neural network based on an optimization method (Adam) and a loss function MSE, and optimizing and updating parameters in the cyclic neural network; the basis of the optimization and updating stop is the model load verification loss rate of the cyclic neural network;
and the prediction module is used for storing the recurrent neural network model meeting the verification loss rate, and inputting the data in the test set into that model, thereby obtaining the predicted electricity load.
The invention also provides a memory storing a plurality of instructions for implementing the method according to embodiment one.
As shown in fig. 3, the present invention further provides an electronic device, including a processor 301 and a memory 302 connected to the processor 301, where the memory 302 stores a plurality of instructions that can be loaded and executed by the processor, so that the processor can perform the method according to embodiment one.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. A method of predicting an electrical load, comprising:
s1, obtaining data required by electricity consumption load prediction, wherein the data comprise real historical load data of a certain area, date of corresponding time of the real historical load data, holidays and meteorological data, and the real historical load data comprise date in an electricity consumption database of an electricity user and daily multipoint electric power data information in a period of time;
s2, carrying out digital coding and exception processing on date, holiday and meteorological data of the corresponding time of the real historical load data; wherein the date feature comprises: year, month, week, day; the holiday features include: holiday (holiday); the characteristics of the meteorological data include: topTem (day highest air temperature), lowTem (day lowest air temperature), avgTem (average air temperature), rain (rainfall) and hum (humidity);
s3, obtaining correlations between a plurality of non-load data and real historical load data based on a mutual information calculation mode, sorting the correlations according to the size, and selecting corresponding non-load data with strong correlations to be combined with the real historical load data to form an original data set;
S4, carrying out data specification and normalization on the original data in the original data set, and dividing the original data set into three parts, namely a training set, a verification set and a test set;
s5, a Seq2Seq model comprising an Encoder layer and a Decoder layer is established, and real historical load data information is extracted through the Encoder layer in the Seq2Seq model;
s6, carrying out weight calculation on the extracted information output by the Encoder layer based on an attention mechanism, thereby obtaining intermediate variables containing rich historical information;
s7, converting the information of the intermediate variable into the power load information to be predicted based on a cyclic neural network of one layer and a Decoder layer of one linear layer;
s8, based on an optimization method Adam and a loss function MSE, inputting selected normalized data of the training set and data of the verification set into the cyclic neural network, and optimizing and updating parameters in the cyclic neural network; the basis of the optimization and updating stop is the model load verification loss rate of the cyclic neural network;
and S9, storing the recurrent neural network model meeting the verification loss rate, and inputting the data in the test set into the recurrent neural network model meeting the verification loss rate, so as to obtain the predicted electricity load.
2. The method for predicting electrical loads according to claim 1, wherein the step S2 of digitally encoding the date, holiday and weather data of the time corresponding to the real historical load data and exception handling comprises the steps of:
s21, the digital coding comprises replacing non-digital original data with digital coding, wherein the specific formula is shown in (1):
s22, the exception handling includes:
(1) Based on the characteristic that the real historical load data are continuous and loads at adjacent times do not differ greatly, a load value whose sudden change satisfies the judgment of formula (2) is regarded as an abnormal value:
where p_i is the load value at a certain time point and ζ is a threshold between 0 and 1; when formula (2) is satisfied, the abnormal value is deleted for subsequent correction;
(2) When the real historical load data and other related data have the condition of data loss, carrying out correction in advance; if the current missing value is single, correcting by adopting a method of close-proximity averaging, as shown in a formula (3):
p_i = (p_{i+1} + p_{i−1}) / 2   (3);
when load data are continuously missing at the same time over several days, the missing value is replaced by a weighted average of the values at the same time in the previous d days, as shown in formula (4):
where w_k is the weight coefficient of the k-th previous day.
3. The method for predicting the power consumption load according to claim 2, wherein the step S3 of obtaining the correlations between the plurality of non-load data and the real historical load data based on the mutual information calculation mode, sorting the correlations by size, and selecting the corresponding strongly correlated non-load data to combine with the real historical load data into the original data set comprises the steps of:
S31: the effect of the different date features and the meteorological data features on the change of the electric load is calculated, including calculating, within a selected time, the marginal probability densities p(X) and p(Y) of the electric load data X and one of the characteristic variables Y, and their joint probability density function p(X, Y);
S32: based on the marginal probability densities p(X) and p(Y) and the joint probability density function p(X, Y), the mutual information I(X, Y) between the electric load data X and the characteristic variable Y is calculated as shown in formula (5):

I(X, Y) = Σ_x Σ_y p(x, y) · log( p(x, y) / (p(x) · p(y)) )   (5);
s33: obtaining correlations between a plurality of non-load data and real historical load data through the calculation result of the mutual information, and sequencing the correlations according to the size;
s34, selecting corresponding non-load data with strong correlation as a relevant characteristic variable, and combining the non-load data with the actual historical load data to form an original data set.
4. A method for predicting an electrical load according to claim 3, wherein S4 performs data normalization and normalization on the original data in the original data set, and divides the original data set into three parts, namely a training set, a verification set and a test set, and includes:
S41: all digitally encoded data in the raw dataset are normalized into [0, 1] based on the MinMaxScale method to reduce the complexity of network training; the normalization formula is formula (6):

n_ii = (n_i − n_imin) / (n_imax − n_imin)   (6);

where n_i denotes the i-th feature, n_ii the normalized data, and n_imax and n_imin the maximum and minimum values of the i-th feature;
S42: the normalized data are divided into a training set, a verification set and a test set in the quantity ratio 60% : 20% : 20%;
S43: the data in the training set, the verification set and the test set are preprocessed into the sequence and label structure required by the network training model.
5. The method for predicting electrical loads according to claim 4, wherein the step of S5 of creating a Seq2Seq model, extracting real historical load data information by an Encoder layer in the Seq2Seq model comprises:
S51: establishing a Seq2Seq frame, forming a single-layer extended cyclic neural network structure LSTM structure based on an LSTM network, and using data input at the moment t and the hidden state h of a t-d node t-d And cell state c t-d As a cell input for the current LSTM structure; the output h of the extended LSTM structure t And c t As shown in formulas (7) - (12):
f t =σ(W i ·[h t-d ,x t ]+b i ) (7);
u t =σ(W j ·[h t-d ,x t ]+b j ) (8);
v t =tanh(W z ·[h t-d ,x t ]+b z ) (9);
c t =f t *c t-d +u t *v t (10);
y t =σ(W o [h t-d ,x t ]+b o ) (11);
h t =y t *tanh(c t ) (12);
where σ is the sigmoid activation function and tanh(·) is the hyperbolic tangent activation function; W and b are the weight parameters and biases of the model, respectively; y_t is the output-gate activation at time t;
S52: a plurality of continuous LSTM structures are expanded as the Encoder layer to extract the real historical load data information; the extracted information comprises the outputs h_i of the Encoder layer and the last hidden state S.
6. The method for predicting electrical loads according to claim 5, wherein S6, based on an attention mechanism, performs weight calculation on the extracted information output by the Encoder layer, thereby obtaining an intermediate variable containing rich history information, and includes:
S61: the extracted information output in S52 is input into the Attention layer of the attention mechanism; the most likely key information in the Encoder layer output is extracted and compressed into the intermediate variable c_0, as shown in formulas (13)-(15):

W_i = h_i * S   (13);
7. The method of claim 6, wherein in S7 the Decoder layer consists of one LSTM network layer and one linear layer, and converting the information of the intermediate variable into the power load information to be predicted comprises:
8. The method for predicting electrical loads according to claim 7, wherein S8 inputs the selected normalized training set data and the verification set data into the recurrent neural network based on the optimization method Adam and the loss function MSE, and optimizes and updates the parameters in the recurrent neural network; the criterion for stopping the optimization and updating is the verification loss rate of the recurrent neural network model, comprising the steps of:
S81: the predicted output sequence ŷ_t of the linear layer in the neural network is compared with the tag sequence y_t of the corresponding data, and the loss rate is calculated using the mean square error MSE as the loss function:

MSE = (1/n) · Σ_{t=1..n} (ŷ_t − y_t)²;

S82: the learning rate is set to lr = 0.005, and it is judged whether the loss rate on the verification set meets the specified constraint τ; if MSE < τ, training is finished, and if MSE > τ, parameter optimization is performed: the weight parameters W and biases b in the network are updated using the gradient optimization function Adam.
9. The method according to claim 8, wherein S9 stores the recurrent neural network model conforming to the verification loss rate, and inputs the data in the test set into the recurrent neural network model conforming to the verification loss rate, thereby obtaining the predicted electrical load, comprising:
S91: after the final loss value of the model meets the constraint τ in the training stage, it is confirmed that the model's prediction of future data meets the requirement;
S92: the test set data are input into the recurrent neural network model meeting the verification loss rate to predict the electricity load, and the model is further optimized according to the final result.
10. An electrical load prediction system, comprising:
the data acquisition module is used for acquiring data required by electricity consumption load prediction, wherein the data comprise real historical load data of a certain region, date of corresponding time of the real historical load data, holidays and meteorological data, and the real historical load data comprise date in an electricity consumption database of an electricity user and daily multipoint electric power data information in a period of time;
The data processing module is used for carrying out digital coding and exception processing on the date, holiday and meteorological data of the time corresponding to the real historical load data;
the data set module is used for obtaining correlations between a plurality of non-load data and real historical load data based on a mutual information calculation mode, sequencing the correlations according to the size, and selecting corresponding non-load data with strong correlations to be combined with the real historical load data to form an original data set;
the data set dividing module is used for carrying out data specification and normalization on the original data in the original data set and dividing the original data set into three parts, namely a training set, a verification set and a test set;
the information extraction module is used for establishing a Seq2Seq model and extracting real historical load data information through an Encoder layer in the Seq2Seq model;
the intermediate variable module is used for carrying out weight calculation on the extracted information output by the Encoder layer based on an attention mechanism so as to obtain intermediate variables containing rich historical information;
the information conversion module is used for converting the information of the intermediate variable into the power load information to be predicted based on a cyclic neural network of one layer and a Decoder layer of one linear layer;
The neural network training module is used for inputting the selected normalized data of the training set and the data of the verification set into the cyclic neural network based on an optimization method Adam and a loss function MSE, and optimizing and updating parameters in the cyclic neural network; the basis of the optimization and updating stop is the model load verification loss rate of the cyclic neural network;
and the prediction module is used for saving the recurrent neural network model that satisfies the verification loss rate, and inputting the data of the test set into the saved model, thereby obtaining the predicted power load.
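For illustration only, saving the model that met the verification-loss criterion and running it on the test set might look as follows; the `.npz` format, the file path and the toy parameter vector `w` are all illustrative assumptions.

```python
import os
import tempfile
import numpy as np

# toy stand-ins: `w` plays the role of the trained model parameters,
# `Xte` the role of the normalized test-set inputs
w = np.array([0.5, -1.0, 2.0])
Xte = np.ones((4, 3))

path = os.path.join(tempfile.mkdtemp(), "best_model.npz")
np.savez(path, w=w)                 # persist the model that met the criterion
restored = np.load(path)["w"]       # reload it at prediction time
predicted_load = Xte @ restored     # predicted power load on the test set
```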
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211527627.3A CN116090602A (en) | 2022-11-30 | 2022-11-30 | Power load prediction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116090602A (en) | 2023-05-09 |
Family
ID=86209181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211527627.3A Pending CN116090602A (en) | 2022-11-30 | 2022-11-30 | Power load prediction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116090602A (en) |
Events

- 2022-11-30: Application CN202211527627.3A filed (CN); published as CN116090602A; status: Pending
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116523277A (en) * | 2023-07-05 | 2023-08-01 | 北京观天执行科技股份有限公司 | Intelligent energy management method and system based on demand response |
CN116523277B (en) * | 2023-07-05 | 2023-10-20 | 北京观天执行科技股份有限公司 | Intelligent energy management method and system based on demand response |
CN116914747A (en) * | 2023-09-06 | 2023-10-20 | 国网山西省电力公司营销服务中心 | Power consumer side load prediction method and system |
CN116914747B (en) * | 2023-09-06 | 2024-01-12 | 国网山西省电力公司营销服务中心 | Power consumer side load prediction method and system |
CN117674098A (en) * | 2023-11-29 | 2024-03-08 | 国网浙江省电力有限公司丽水供电公司 | Multi-element load space-time probability distribution prediction method and system for different permeability |
CN117674098B (en) * | 2023-11-29 | 2024-06-07 | 国网浙江省电力有限公司丽水供电公司 | Multi-element load space-time probability distribution prediction method and system for different permeability |
CN117495434A (en) * | 2023-12-25 | 2024-02-02 | 天津大学 | Electric energy demand prediction method, model training method, device and electronic equipment |
CN117495434B (en) * | 2023-12-25 | 2024-04-05 | 天津大学 | Electric energy demand prediction method, model training method, device and electronic equipment |
CN117808175A (en) * | 2024-03-01 | 2024-04-02 | 南京信息工程大学 | Short-term multi-energy load prediction method based on DTformer |
CN117808175B (en) * | 2024-03-01 | 2024-05-17 | 南京信息工程大学 | DTformer-based short-term multi-energy load prediction method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116090602A (en) | Power load prediction method and system | |
CN112365040B (en) | Short-term wind power prediction method based on multi-channel convolution neural network and time convolution network | |
CN110348624B (en) | Sand storm grade prediction method based on Stacking integration strategy | |
Du et al. | Power load forecasting using BiLSTM-attention | |
CN111260136A (en) | Building short-term load prediction method based on ARIMA-LSTM combined model | |
CN110580543A (en) | Power load prediction method and system based on deep belief network | |
CN113205226B (en) | Photovoltaic power prediction method combining attention mechanism and error correction | |
CN110222901A (en) | A kind of electric load prediction technique of the Bi-LSTM based on deep learning | |
CN110766212A (en) | Ultra-short-term photovoltaic power prediction method for historical data missing electric field | |
CN113905391A (en) | Ensemble learning network traffic prediction method, system, device, terminal, and medium | |
CN111401755A (en) | Multi-new-energy output scene generation method, device and system based on Markov chain | |
CN115169703A (en) | Short-term power load prediction method based on long-term and short-term memory network combination | |
CN112329990A (en) | User power load prediction method based on LSTM-BP neural network | |
CN115587454A (en) | Traffic flow long-term prediction method and system based on improved Transformer model | |
CN113554466A (en) | Short-term power consumption prediction model construction method, prediction method and device | |
Ziyabari et al. | Multibranch attentive gated resnet for short-term spatio-temporal solar irradiance forecasting | |
CN115600640A (en) | Power load prediction method based on decomposition network | |
CN113360848A (en) | Time sequence data prediction method and device | |
CN115409258A (en) | Hybrid deep learning short-term irradiance prediction method | |
CN110222910B (en) | Active power distribution network situation prediction method and prediction system | |
Čurčić et al. | Gaining insights into dwelling characteristics using machine learning for policy making on nearly zero-energy buildings with the use of smart meter and weather data | |
CN114444811A (en) | Aluminum electrolysis mixing data superheat degree prediction method based on attention mechanism | |
CN116885699A (en) | Power load prediction method based on dual-attention mechanism | |
CN116404637A (en) | Short-term load prediction method and device for electric power system | |
CN116402194A (en) | Multi-time scale load prediction method based on hybrid neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||