CN113256017A - Short-term load prediction method and system - Google Patents
- Publication number: CN113256017A (application CN202110630901.9A)
- Authority: CN (China)
- Prior art keywords: load, historical, prediction, layer, short
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06Q50/06 — Energy or water supply
Abstract
The disclosed short-term load prediction method and system include: acquiring a historical load; inputting the historical load into a trained load sequence prediction model to generate a synthesized load; performing a projection operation on the synthesized load to obtain a synthesized historical load and a load to be predicted; and inputting the synthesized historical load, the historical load, and the load to be predicted into a trained load predictor, which outputs a short-term load prediction result. Accurate prediction of the short-term load is thereby achieved.
Description
Technical Field
The invention relates to the technical field of short-term load prediction, in particular to a short-term load prediction method and a short-term load prediction system.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Accurate short-term load prediction in power distribution areas is an important task. With the rising penetration of uncertain renewable resources and the intense competition among entities in the power market environment, uncertainty poses a challenge to system operators: balancing real-time prediction errors requires expensive reserve capacity. Accurate short-term load predictions are therefore critical to coping with uncertainty, and they provide a reliable basis for the real-time market.
Short-term load prediction techniques fall into two categories: statistical prediction methods and artificial-intelligence-based prediction methods. Statistical prediction methods include linear regression models, semi-parametric additive models, and exponential smoothing models. "Modeling and forecasting of cooling and electricity load demand" (Vaghefi A, Jafari M A, Bisse E, et al., Applied Energy, vol. 136, pp. 186-196, 2014) proposed predicting power demand by combining a multiple linear regression model with a seasonal autoregressive moving average model. Artificial-intelligence-based prediction methods include artificial neural network prediction, fuzzy system prediction, support vector regression (SVR) prediction, and gradient boosting decision tree prediction. Artificial intelligence models are better able to cope with the non-linear relationships between power load and other factors (e.g., weather, temperature). Representative of these are deep learning models, whose multiple non-linear layers transform lower-level feature representations into higher-level abstract features. Among deep learning models, the recurrent neural network (RNN) is suited to tasks with sequential input and is often applied in machine translation, speech recognition, natural language processing, and similar fields. The long short-term memory (LSTM) network is a special RNN proposed to solve the vanishing gradient problem of RNNs, and achieves higher prediction accuracy in short-term load prediction.
The existing short-term load prediction method has the following problems:
1) Shallow machine-learning load prediction models, represented by the support vector machine (SVM), suffer from low prediction accuracy and computational efficiency.
2) Large amounts of effective data are difficult or expensive to obtain, and with little data a deep learning model generalizes poorly and predicts inaccurately on the test set, so the model's prediction results are unreliable.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides a short-term load prediction method and system, which implement accurate prediction of a short-term load.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
in a first aspect, a short-term load prediction method is provided, including:
acquiring a historical load;
inputting the historical load into a trained load sequence prediction model to generate a synthesized load;
performing projection operation on the synthesized load to obtain a synthesized historical load and a load to be predicted;
and inputting the synthesized historical load, the historical load and the load to be predicted into a trained load predictor, and outputting a short-term load prediction result.
In a second aspect, a short-term load prediction system is provided, comprising:
the data acquisition module is used for acquiring historical loads;
the synthetic load acquisition module is used for inputting the historical load into the trained load sequence prediction model to generate a synthetic load;
the synthetic load projection module is used for carrying out projection operation on the synthetic load to obtain a synthetic historical load and a load to be predicted;
and the load prediction module is used for inputting the synthesized historical load, the historical load and the load to be predicted into the trained load predictor and outputting a short-term load prediction result.
In a third aspect, an electronic device is provided, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the short-term load prediction method.
In a fourth aspect, a computer-readable storage medium is provided for storing computer instructions which, when executed by a processor, perform the steps of the short-term load prediction method.
Compared with the prior art, the beneficial effect of this disclosure is:
1. When short-term load prediction is performed by the load predictor, a synthesized load is generated by the load sequence prediction model, and the load predictor outputs the short-term load prediction result using both the synthesized load and the historical load. The synthesized load generated by the load sequence prediction model improves the generalization capability of the load predictor and thereby the accuracy of the short-term load prediction.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
Fig. 1 is a schematic diagram of a short-term load prediction model architecture disclosed in embodiment 1 of the present disclosure;
FIG. 2 is a comparative graph of the actual load and the load generated by the trained load prediction model disclosed in example 1 of the present disclosure;
FIG. 3 is a load sequence generated during the training process of the load sequence prediction model disclosed in embodiment 1 of the present disclosure;
FIG. 4 is a graph comparing the effect of the method disclosed in the embodiment 1 of the present disclosure with other prediction methods.
Wherein: 1, generative adversarial network; 2, synthesized load; 3, input layer; 4, LSTM layer; 5, fully connected layer; 6, predicted load; 7, synthesized historical load; 8, historical load.
Detailed Description:
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
In the present disclosure, terms such as "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "side", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only relational terms determined for convenience in describing structural relationships of the parts or elements of the present disclosure, and do not refer to any parts or elements of the present disclosure, and are not to be construed as limiting the present disclosure.
In the present disclosure, terms such as "fixedly connected", "connected", and the like are to be understood in a broad sense, and mean either a fixed connection or an integrally connected or detachable connection; may be directly connected or indirectly connected through an intermediate. The specific meanings of the above terms in the present disclosure can be determined on a case-by-case basis by persons skilled in the relevant art or technicians, and are not to be construed as limitations of the present disclosure.
Example 1
In order to realize accurate prediction of short-term load, in this embodiment, a short-term load prediction method is disclosed, including:
acquiring a historical load;
inputting the historical load into a trained load sequence prediction model to generate a synthesized load;
performing projection operation on the synthesized load to obtain a synthesized historical load and a load to be predicted;
and inputting the synthesized historical load, the historical load and the load to be predicted into a trained load predictor, and outputting a short-term load prediction result.
Further, the load sequence prediction model adopts a generative adversarial network (GAN).
Further, the generative adversarial network includes a generator and a discriminator: the generator generates the synthesized load, and the discriminator judges the synthesized load generated by the generator.
Further, objective functions of the generator and the discriminator are constructed respectively, the two objective functions are combined to obtain a minimax objective function, and the adversarial network is trained through the minimax objective function.
Further, the load predictor employs a deep LSTM network model.
Further, the deep LSTM network model comprises an input layer, an LSTM layer, and a fully connected layer; the synthesized historical load and the historical load are input into the LSTM layer through the input layer, the output of the LSTM layer and the load to be predicted are input into the fully connected layer, and the prediction result is output through the fully connected layer.
Furthermore, there is at least one LSTM layer; when multiple LSTM layers are stacked, the output of the previous LSTM layer serves as the input of the current layer, the synthesized historical load and the historical load are input into the first LSTM layer, and the output of the last LSTM layer is input into the fully connected layer.
A short-term load prediction method disclosed in the present embodiment is described in detail with reference to fig. 1 to 4.
Suppose that at time $t$ a point prediction $\hat p_{t+1}$ needs to be given, where the historical load is $p_{\mathrm{hist}} = [p_t, \dots, p_{t-h}]^\top$. The short-term load prediction method is described in detail as follows:

S1: obtain the historical load $p_{\mathrm{hist}} = [p_t, \dots, p_{t-h}]^\top$.

S2: input the historical load into the trained load sequence prediction model, and output a synthesized load that simulates the load sequence.
In a particular implementation, the load sequence prediction model employs a generative adversarial network (GAN), which is used to capture the potentially random process that drives load generation and can thus automatically generate realistic scenarios based on historical data. GAN is a data-driven unsupervised learning method consisting of two networks, a generator and a discriminator. The generator tries to fool the discriminator by generating ever more realistic scenes, while the discriminator does its best to distinguish whether a scene comes from the real input data or was produced by the generator. That is, in this embodiment, the generator generates the synthesized load, and the discriminator judges the synthesized load generated by the generator.
A minimax objective function for the adversarial training is constructed, and the adversarial network is trained with historical training data based on this minimax objective function, yielding the trained generative adversarial network.
The minimax objective function is constructed as follows: objective functions of the generator and the discriminator are built respectively, and the two are combined into the minimax objective function.
In the standard GAN formulation, the generator's objective is $\min_G \; \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$ and the discriminator's objective is $\max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$, where $z$ is the random noise input, $x$ is the actual load sequence, $p_z$ is the distribution of the input noise, and $p_{\mathrm{data}}$ is the distribution that the actual load sequence follows.

Combining the two objective functions yields the minimax objective function:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

The goal of the minimax objective is to minimize the Jensen-Shannon divergence between the distributions of the actual data and the synthesized data samples: at the optimal discriminator, the objective equals $2\,\mathrm{JSD}(p_{\mathrm{data}} \,\|\, p_G) - 2\log 2$.
S3: perform a projection operation on the synthesized load to obtain the synthesized historical load and the load to be predicted.

In a specific implementation, the projection operation is performed on the synthesized load $G(z)$ to obtain the synthesized historical load $\hat p_{\mathrm{hist}} \in \mathbb{R}^h$ and the load to be predicted $\hat p_{\mathrm{pred}} \in \mathbb{R}^k$, where $h$ is the number of historical load points and $k$ is the number of load points to be predicted.
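The projection in S3 is essentially a split of the synthesized sequence into its first $h$ points and its last $k$ points. A minimal sketch, with a hypothetical helper name (`project` is not a name from the patent):

```python
def project(synthetic_load, h, k):
    """Split a synthesized load sequence of length h + k into a synthesized
    historical load (first h points) and a load to be predicted (last k points).
    Hypothetical helper illustrating the projection operation of step S3."""
    if len(synthetic_load) != h + k:
        raise ValueError("expected a sequence of length h + k")
    return synthetic_load[:h], synthetic_load[h:]

# In the experiment below, h = 143 history hours and k = 1 target hour,
# so a 144-point synthesized sequence splits into 143 + 1 points.
seq = list(range(144))
hist, pred = project(seq, h=143, k=1)
```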
S4: inputting the combined historical load, historical load and load to be predicted into a load predictor F (x; theta)(F)) And outputting a short-term load prediction result.
In a specific implementation, the load predictor is a deep LSTM network model comprising an input layer, a plurality of LSTM layers, and a fully connected layer. The synthesized historical load and the actual historical load serve as inputs of the input layer; the LSTM layers are stacked in sequence, and the output of the last LSTM layer serves as input of the fully connected layer. The LSTM layers extract the time-dependent features of the load sequence and learn the load patterns, and the fully connected layer linearly converts the features extracted by the LSTM layers into the predicted load.
The predictor is trained with a mean-squared-error objective. The exact weighting used in the original is not reproduced here; in a standard composite form it can be written as

$$L(\theta^{(F)}) = \frac{1}{M} \sum_{i=1}^{M} \Big[ \alpha \big(p_i - F(p_{\mathrm{hist},i})\big)^2 + \beta \big(\hat p_{\mathrm{pred},i} - F(\hat p_{\mathrm{hist},i})\big)^2 \Big]$$

where $\alpha$ and $\beta$ are the relative weights between the prediction errors on the actual historical load and the synthesized load, and $M$ is the number of actual load samples to be predicted.
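Under the hedged formulation above, the composite objective is a weighted sum of two mean squared errors. The sketch below is illustrative; `composite_mse` and the sample weights are assumptions, not the patent's exact formula:

```python
def composite_mse(actual, pred_actual, synthetic, pred_synthetic,
                  alpha=1.0, beta=0.5):
    """Weighted composite training objective: MSE on real samples (weight
    alpha) plus MSE on GAN-synthesized samples (weight beta). One standard
    formulation of the weighted objective described in the text."""
    real_err = sum((a - p) ** 2 for a, p in zip(actual, pred_actual)) / len(actual)
    syn_err = sum((s - p) ** 2 for s, p in zip(synthetic, pred_synthetic)) / len(synthetic)
    return alpha * real_err + beta * syn_err

# Real errors of 0.1 each and one synthetic error of 0.5, half-weighted:
loss = composite_mse([1.0, 2.0], [1.1, 1.9], [3.0], [3.5], alpha=1.0, beta=0.5)
```

Setting `beta=0` recovers a plain predictor trained only on real data, which is the baseline the experiments compare against.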
LSTM is a special type of RNN designed to address long-term dependencies in time series. The deep LSTM model can capture temporal dependencies thanks to the storage mechanism inside the LSTM. An LSTM layer includes memory, storage, and activation units, and at each time step the LSTM performs three operations to decide what to store: 1) the forget gate $\Gamma_f$ decides whether to keep the memory cell $c^{\langle t-1 \rangle}$ from the past time step; 2) the update gate $\Gamma_u$ decides whether to reset it to the latest candidate state $\tilde c^{\langle t \rangle}$; 3) the output gate $\Gamma_o$ then decides whether to write the cell $c^{\langle t \rangle}$ to the activation unit $a^{\langle t \rangle}$.

The activation units of the last LSTM layer are used as the input of the fully connected layer.
When the trained predictor is used for short-term load prediction, the synthesized historical load and the historical load are input into the LSTM layer through the input layer, the output of the LSTM layer and the load to be predicted are input into the full-connection layer, and the prediction result of the short-term load is output through the full-connection layer.
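The three-gate update described above can be sketched as a single numpy LSTM forward step. Weight shapes and variable names here are illustrative conventions, not taken from the patent:

```python
import numpy as np

def lstm_step(x_t, a_prev, c_prev, W, b):
    """One LSTM time step with the three gates described above.
    W maps the concatenated [a_prev; x_t] to the stacked pre-activations of
    the forget, update, and output gates and the candidate cell state.
    Illustrative shapes: x_t (nx,), a_prev/c_prev (na,), W (4*na, na+nx), b (4*na,)."""
    na = a_prev.shape[0]
    z = W @ np.concatenate([a_prev, x_t]) + b
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    gamma_f = sigmoid(z[0 * na:1 * na])   # forget gate: keep c_prev?
    gamma_u = sigmoid(z[1 * na:2 * na])   # update gate: write the candidate?
    gamma_o = sigmoid(z[2 * na:3 * na])   # output gate: expose the cell?
    c_tilde = np.tanh(z[3 * na:4 * na])   # candidate cell state
    c_t = gamma_f * c_prev + gamma_u * c_tilde
    a_t = gamma_o * np.tanh(c_t)          # activation passed onward
    return a_t, c_t

# Run a toy scalar sequence through one cell; in the stacked model the
# activations feed the next LSTM layer, and the last layer's feed the
# fully connected layer.
rng = np.random.default_rng(0)
na, nx = 8, 1
W = 0.1 * rng.standard_normal((4 * na, na + nx))
b = np.zeros(4 * na)
a, c = np.zeros(na), np.zeros(na)
for t in range(5):
    a, c = lstm_step(np.array([float(t)]), a, c, W, b)
```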
The generative adversarial network and the predictor of this embodiment together constitute the short-term load prediction model.
The specific process of training the short-term load prediction model to obtain the trained short-term load prediction model is as follows:
acquiring historical load training data;
training the generative adversarial network on the historical load training data to obtain a well-trained generative adversarial network;

inputting the historical load training data into the trained generative adversarial network, and outputting synthesized load training data;
projecting the synthesized load training data to generate synthesized historical load training data and load training data to be predicted;
and inputting the synthesized historical load training data, the load training data to be predicted and the historical load training data into a predictor, and training the predictor to obtain the trained predictor.
The short-term load prediction method disclosed in this embodiment extends the GAN algorithm to the STLF domain and uses a generator pre-trained by the GAN to give the deep LSTM model higher generalization capability. Specifically, the GAN yields a generator that produces simulated load scenarios, and the LSTM model serves as the load predictor. The full network then connects the predictor and the generator, with the generator's weights initialized from those obtained by the GAN. Under this architecture, the generator and the load predictor work cooperatively to reduce the prediction error: once the gradient of the prediction error flows back into the generator, it can learn how to generate load sequences that reduce the prediction error.
Experimental verification is performed on the short-term load prediction method disclosed in the embodiment.
The data used in the experiment come from the GEFCom2012 competition; only the 2004 to 2005 historical load data of zone one are used, in order to test the generalization ability of the algorithm. In the STLF task, the load for one hour in the future is predicted from the historical load of the previous 143 hours. Thus the goal of the GAN is to produce a 144-hour load sequence, i.e., 144 hours of synthesized load; 143 hours of synthesized historical load are then used as input to the load predictor, with the remaining hour of load to be predicted serving as the label.
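The 143 + 1 setup described above amounts to sliding a 144-hour window over the hourly series to build (history, label) pairs. A minimal sketch (helper name hypothetical):

```python
def make_windows(series, h=143, k=1):
    """Build (history, target) training pairs from an hourly load series:
    each sample uses h past hours as input and the next k hours as label,
    matching the 143 + 1 = 144-hour window used in the experiment."""
    samples = []
    for i in range(len(series) - h - k + 1):
        samples.append((series[i:i + h], series[i + h:i + h + k]))
    return samples

# 200 hours of (dummy) load yield 200 - 144 + 1 = 57 overlapping samples.
pairs = make_windows(list(range(200)), h=143, k=1)
```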
To test the quality of the GAN-generated load sequences, two criteria, diversity and sharpness, were used for evaluation:
1) Pattern diversity: the probability distribution describing the load sequence is complex and multimodal, meaning that different sub-samples concentrate under different peaks. A trained generator should therefore generate load sequences covering multiple scenarios rather than being limited to a few specific patterns. To check the diversity of the sequences generated by the GAN, generated sequences are randomly sampled and compared with actual load sequences from the test set. As shown in fig. 2 ((a) actual load, (b) synthesized load produced by the generator; ordinate: load value/kW, abscissa: time/h), the synthesized load sequences capture the periodic characteristics of the load and also show falling or rising trends. To quantify diversity, the train-on-synthetic, test-on-real (TSTR) protocol is adopted: the SVR algorithm is trained on a training set of data synthesized by the GAN and tested on a test set of real data. If the SVR can predict the test set well, the diversity of the synthesized load sequences is sufficient for the SVR to generalize. The results are shown in Table 1.
2) Sharpness: sharpness is a concept commonly used to describe image quality; when an image is sharp its details are evident and the contrast between adjacent pixels is high. Here the train-on-real, test-on-synthetic (TRTS) protocol is used. 2556 noise vectors drawn from a predefined Gaussian distribution are fed to the generator, so the GAN-generated samples have the same size as the test set of real samples. The same STLF task is then trained with SVR on the real training set, and the trained model is tested on both the synthesized test set and the actual test set; the results are also listed in Table 1.
The results in table 1 show that the short-term load prediction method disclosed in this embodiment has a good prediction effect, and the generated load sequence has diversity and sharpness.
TABLE 1 Evaluation results of GAN-generated data

|               | MAPE (%) |
|---------------|----------|
| Real test set | 1.017    |
| TRTS          | 1.502    |
| TSTR          | 1.975    |
2.3 comparison of different algorithms
The method is compared with the SVR algorithm and with a predictor-only method, i.e., a plain deep LSTM network. The algorithms were evaluated on the same test set, and the MAPE results are shown in Table 2. As the table shows, the plain deep LSTM network performs better than SVR, but the present method shows the best generalization ability of the three: in terms of MAPE, it is 30.2% better than SVR and 12.2% better than the plain deep LSTM.
TABLE 2 comparison of prediction results for different algorithms
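MAPE, the metric behind Tables 1 and 2 and the quoted 30.2% / 12.2% improvements, can be sketched as follows (Table 2's raw values are not reproduced in this text, so none are assumed here):

```python
def mape(actual, predicted):
    """Mean absolute percentage error in percent, the evaluation metric
    used in Tables 1 and 2."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def relative_improvement(mape_base, mape_new):
    """Relative MAPE improvement in percent, as in 'X% better than SVR'."""
    return 100.0 * (mape_base - mape_new) / mape_base

# Example: 10 kW error on 100 kW and on 200 kW averages to 7.5% MAPE.
print(mape([100.0, 200.0], [110.0, 190.0]))  # 7.5
```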
To find out which kinds of load sequence the generator considers helpful in reducing prediction error, load sequences generated during training are randomly sampled and visualized. As shown in fig. 3 ((a) through (d) are the load sequences generated after 100, 200, 300, and 400 training rounds, respectively; ordinate: load value/kW, abscissa: time/h), at the beginning of training the generated sequences do not differ much from the real data. However, as the generator parameters are updated, noise components are added to the load sequences; these may have a regularizing effect that helps the predictor fit the data better.
In addition, to show the model dynamics, fig. 4 depicts the MAPE on the real training data during training. At the beginning of training, the method converges faster than the predictor-only method, and the whole training process is more stable. At the end of training, the method has a higher MAPE on the training set, which further demonstrates that the sequences produced by the generator have a regularizing effect: they may prevent the predictor from overfitting and thus improve generalization.
Example 2
In this embodiment, a short term load prediction system is disclosed, comprising:
the data acquisition module is used for acquiring historical loads;
the synthetic load acquisition module is used for inputting the historical load into the trained load sequence prediction model to generate a synthetic load;
the synthetic load projection module is used for carrying out projection operation on the synthetic load to obtain a synthetic historical load and a load to be predicted;
and the load prediction module is used for inputting the synthesized historical load, the historical load and the load to be predicted into the trained load predictor and outputting a short-term load prediction result.
Example 3
In this embodiment, an electronic device is disclosed, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the short-term load prediction method disclosed in embodiment 1.
Example 4
In this embodiment, a computer readable storage medium is disclosed for storing computer instructions that, when executed by a processor, perform the steps of a short term load prediction method disclosed in embodiment 1.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalent replacements may be made to the embodiments without departing from the spirit and scope of the invention, which is defined by the claims.
Claims (10)
1. A method for short-term load prediction, comprising:
acquiring a historical load;
inputting the historical load into a trained load sequence prediction model to generate a synthesized load;
performing projection operation on the synthesized load to obtain a synthesized historical load and a load to be predicted;
and inputting the synthesized historical load, the historical load and the load to be predicted into a trained load predictor, and outputting a short-term load prediction result.
2. The method of claim 1, wherein the load sequence prediction model employs a generative adversarial network.
3. The method of claim 2, wherein the generative adversarial network includes a generator and a discriminator, the generator generating the synthetic load and the discriminator discriminating the synthetic load generated by the generator.
4. The method as claimed in claim 2, wherein objective functions of the generator and the discriminator are constructed respectively, the objective functions are combined to obtain a minimax objective function, and the generative adversarial network is trained through the minimax objective function.
5. The method of claim 1, wherein the load predictor uses a deep LSTM network model.
6. The short-term load prediction method as claimed in claim 5, wherein the deep LSTM network model includes an input layer, an LSTM layer, and a fully connected layer; the synthesized historical load and the historical load are fed into the LSTM layer through the input layer, the output of the last LSTM layer together with the load to be predicted is fed into the fully connected layer, and the fully connected layer outputs the prediction result.
7. The method of claim 6, wherein there is at least one LSTM layer; when multiple LSTM layers are stacked, the output of the previous LSTM layer serves as the input of the current layer, the synthesized historical load and the historical load are input into the first LSTM layer, and the output of the last LSTM layer is input into the fully connected layer.
8. A short-term load prediction system, comprising:
the data acquisition module is used for acquiring historical loads;
the synthetic load acquisition module is used for inputting the historical load into a trained load sequence prediction model to generate a synthetic load;
the synthetic load projection module is used for carrying out projection operation on the synthetic load to obtain a synthetic historical load and a load to be predicted;
and the load prediction module is used for inputting the synthesized historical load, the historical load and the load to be predicted into the trained load predictor and outputting a short-term load prediction result.
9. An electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, the computer instructions, when executed by the processor, performing the steps of the short-term load prediction method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the short-term load prediction method as claimed in any one of claims 1 to 7.
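The minimax objective of claim 4 corresponds to the standard GAN value function V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))], which the discriminator maximizes and the generator minimizes. The following numeric sketch illustrates the two loss terms; the discriminator probabilities below are illustrative placeholders, not outputs of the patent's trained networks:

```python
import math

def discriminator_loss(d_real, d_fake):
    """Discriminator ascends V(D, G); minimizing the negative is
    equivalent: -[log D(x) + log(1 - D(G(z)))], averaged over a batch."""
    return -sum(math.log(r) + math.log(1.0 - f)
                for r, f in zip(d_real, d_fake)) / len(d_real)

def generator_loss(d_fake):
    """Generator descends on log(1 - D(G(z))); the common
    non-saturating variant -log D(G(z)) is used here."""
    return -sum(math.log(f) for f in d_fake) / len(d_fake)

# Illustrative discriminator outputs for real and generated load sequences
d_real = [0.9, 0.8]   # D(x): confidence that real samples are real
d_fake = [0.2, 0.3]   # D(G(z)): confidence that fakes are real
print(discriminator_loss(d_real, d_fake) > 0)                # → True
print(generator_loss(d_fake) > generator_loss([0.9, 0.9]))   # → True
```

As the generator improves, D(G(z)) rises toward the discriminator's outputs on real data and the generator loss falls, which is the equilibrium the minimax training of claim 4 seeks.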
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110630901.9A CN113256017A (en) | 2021-06-07 | 2021-06-07 | Short-term load prediction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113256017A true CN113256017A (en) | 2021-08-13 |
Family
ID=77186772
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110630901.9A Pending CN113256017A (en) | 2021-06-07 | 2021-06-07 | Short-term load prediction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113256017A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130110756A1 (en) * | 2011-10-31 | 2013-05-02 | Siemens Corporation | Short-term Load Forecast Using Support Vector Regression and Feature Learning |
CN111191835A (en) * | 2019-12-27 | 2020-05-22 | 国网辽宁省电力有限公司阜新供电公司 | IES incomplete data load prediction method and system based on C-GAN transfer learning |
CN111950868A (en) * | 2020-07-28 | 2020-11-17 | 国网电力科学研究院有限公司 | Comprehensive energy system load scene generation method based on generation countermeasure network |
CN112163715A (en) * | 2020-10-14 | 2021-01-01 | 腾讯科技(深圳)有限公司 | Training method and device of generative countermeasure network and power load prediction method |
CN112686821A (en) * | 2020-12-30 | 2021-04-20 | 南京工程学院 | Load data repairing method based on improved countermeasure network |
Non-Patent Citations (3)
Title |
---|
JIANGUANG ZHANG ET AL: "Deep LSTM and GAN based Short-term Load Forecasting Method at the Zone Level", 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC) * |
ZHANG YUFAN ET AL: "Stochastic scenario generation method for load sequences based on generative adversarial networks", Distribution & Utilization (供用电) * |
XIAO BAI ET AL: "Spatial load forecasting method using generative adversarial networks under data-scarce scenarios", Proceedings of the CSEE (中国电机工程学报) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11829882B2 (en) | System and method for addressing overfitting in a neural network | |
CN110991652A (en) | Neural network model training method and device and electronic equipment | |
CN109886343B (en) | Image classification method and device, equipment and storage medium | |
CN113128478B (en) | Model training method, pedestrian analysis method, device, equipment and storage medium | |
CN109902192B (en) | Remote sensing image retrieval method, system, equipment and medium based on unsupervised depth regression | |
CN111950810B (en) | Multi-variable time sequence prediction method and equipment based on self-evolution pre-training | |
CN112101485B (en) | Target device identification method, electronic device, and medium | |
CN112200296B (en) | Network model quantization method and device, storage medium and electronic equipment | |
CN111104831B (en) | Visual tracking method, device, computer equipment and medium | |
CN114897144A (en) | Complex value time sequence signal prediction method based on complex value neural network | |
CN112884569A (en) | Credit assessment model training method, device and equipment | |
CN112766402A (en) | Algorithm selection method and device and electronic equipment | |
CN114006370A (en) | Power system transient stability analysis and evaluation method and system | |
CN111160526A (en) | Online testing method and device for deep learning system based on MAPE-D annular structure | |
US11373285B2 (en) | Image generation device, image generation method, and image generation program | |
CN117406100A (en) | Lithium ion battery remaining life prediction method and system | |
CN113392867A (en) | Image identification method and device, computer equipment and storage medium | |
CN114926701A (en) | Model training method, target detection method and related equipment | |
CN113256017A (en) | Short-term load prediction method and system | |
CN110889316B (en) | Target object identification method and device and storage medium | |
CN115345303A (en) | Convolutional neural network weight tuning method, device, storage medium and electronic equipment | |
CN113128130B (en) | Real-time monitoring method and device for judging stability of direct-current power distribution system | |
CN117523218A (en) | Label generation, training of image classification model and image classification method and device | |
Li et al. | Intelligent trainer for model-based reinforcement learning | |
CN116501850A (en) | AI expert system-oriented database intelligent optimization decision method and server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||