CN117592612A - Two-stage parallel integrated load prediction method, device, equipment and medium - Google Patents

Two-stage parallel integrated load prediction method, device, equipment and medium

Info

Publication number
CN117592612A
Authority
CN
China
Prior art keywords
sequence
load prediction
segments
lstm
sequence length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311620377.2A
Other languages
Chinese (zh)
Inventor
Zhang Nan (张楠)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangquan Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Original Assignee
Yangquan Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangquan Power Supply Co of State Grid Shanxi Electric Power Co Ltd filed Critical Yangquan Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Priority to CN202311620377.2A
Publication of CN117592612A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00Circuit arrangements for ac mains or ac distribution networks
    • H02J3/003Load forecast, e.g. methods or systems for forecasting future load demand
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Operations Research (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Power Engineering (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of load prediction, and particularly relates to a two-stage parallel integrated load prediction method, device, equipment and medium. Load output historical data are acquired to obtain a first data set; a predicted sequence length is preset, the first data set is input into a basic load prediction unit, and an optimal sequence length and number of segments are selected; a first sequence and a second sequence are selected from the first data set according to the sequence length; the second sequence is divided into m segments, the segments are respectively input into m LSTM models, and a tag sequence is output; the first sequence and the tag sequence are superposed to obtain a third sequence; the third sequence is divided into m segments according to the number of segments, the segments are respectively input into m LSTM models, and the m LSTM models output m segmentation prediction results; the segmentation prediction results are spliced to obtain a load prediction result. By dividing the second sequence to calculate labels, combining the labels with the first sequence, and segmenting again for prediction, both training complexity and prediction accuracy are taken into account and an optimal effect is obtained.

Description

Two-stage parallel integrated load prediction method, device, equipment and medium
Technical Field
The invention belongs to the technical field of load prediction, and particularly relates to a two-stage parallel integrated load prediction method, a device, equipment and a medium.
Background
For the power load prediction task, there are typically three prediction approaches. The first is single-output prediction, in which one model predicts one time step. Several models must run in parallel to independently predict multiple future moments; single-step prediction accuracy is high, but correlations among the prediction results are lost and model training is cumbersome. The second is recursive prediction: for the second and subsequent predictions, the predicted value of the previous step is used as input. Multi-step prediction can be achieved with a single model, but the prediction error accumulates as the number of prediction steps grows, so prediction accuracy drops rapidly. The third is multi-output prediction, where the model outputs several predicted values at once. When deep learning is used to train a multi-output model, the overall loss function is the sum of the individual output losses, so the error grows with the number of outputs. Although multiple predicted values can be output simultaneously and the temporal relationship between outputs is preserved, prediction errors still accumulate.
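To make the difference between the last two approaches concrete, the following sketch contrasts recursive prediction (each one-step forecast is fed back as input, so errors compound) with multi-output prediction (one model emits all future steps at once). The `model` callables are placeholders assumed only for illustration.

```python
import numpy as np

def recursive_forecast(model, history, steps):
    """Recursive prediction: feed each one-step prediction back as the newest input."""
    window = list(history)
    out = []
    for _ in range(steps):
        y = model(np.asarray(window))      # one-step-ahead prediction
        out.append(y)
        window = window[1:] + [y]          # slide the window; errors accumulate here
    return out

def multi_output_forecast(model, history, steps):
    """Multi-output prediction: one model emits all `steps` future values at once."""
    pred = model(np.asarray(history))
    assert len(pred) == steps              # all outputs share one loss, so error grows with steps
    return pred
```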
Disclosure of Invention
The invention aims to provide a two-stage parallel integrated load prediction method, a device, equipment and a medium, which are used for solving the technical problems of complex training and low prediction precision of the existing load prediction method.
In order to achieve the above purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a two-stage parallel integrated load prediction method, including the steps of:
acquiring load output historical data, and preprocessing to obtain a first data set;
taking an LSTM network as a basic load prediction unit, presetting a predicted sequence length, inputting a first data set into the basic load prediction unit, and selecting an optimal sequence length and segment number;
selecting from the first data set according to the sequence length, thereby obtaining a first sequence, and selecting a second sequence from the first data set according to the first sequence and the predicted sequence length;
dividing the second sequence into m segments according to the number of the segments, respectively inputting each segment into m LSTM multi-input single-output models, and outputting a tag sequence by the m LSTM multi-input single-output models;
superposing the first sequence and the tag sequence to obtain a third sequence;
dividing the third sequence into m segments according to the number of segments, respectively inputting each segment into m LSTM multiple-input multiple-output models, and outputting m segmentation prediction results by the m LSTM multiple-input multiple-output models;
and splicing the m segmentation prediction results to obtain a load prediction result.
The invention further improves that: the step of obtaining the load output history data and preprocessing to obtain the first data set specifically comprises the following steps:
checking abnormal values in the load output history data;
correcting the abnormal values by interpolation;
and carrying out normalization processing on the corrected data to obtain a first data set.
The invention further improves that: the LSTM network parameters further comprise a batch sampling number, a learning rate, an LSTM layer number, an LSTM layer size, an attention layer neuron number and an attention layer neuron size.
The invention further improves that: the step of taking the LSTM network as a basic load prediction unit, presetting a predicted sequence length, inputting a first data set into the basic load prediction unit, and selecting the optimal sequence length and the optimal number of segments specifically comprises the following steps:
presetting a predicted sequence length, generating a random sequence length, and inputting a first data set into a basic load prediction unit according to the random sequence length to obtain a basic prediction result;
calculating the root mean square deviation and the average absolute deviation of the basic prediction result;
and selecting the random sequence length corresponding to the minimum weighted root mean square deviation and average absolute deviation as the sequence length.
The invention further improves that: the number of segments is selected according to the length of the predicted sequence, and the number of segments is smaller than or equal to the length of the predicted sequence.
The invention further improves that: the quotient of the predicted sequence length divided by the number of segments is an integer.
The invention further improves that: the LSTM multiple-input single-output model is used for calculating the average value of each segment and outputting the average value as a label.
In a second aspect, the present invention provides a two-stage parallel integrated load prediction apparatus, comprising:
a preprocessing module: used for obtaining load output historical data and preprocessing it to obtain a first data set;
a parameter acquisition module: used for presetting a predicted sequence length, taking an LSTM network as a basic load prediction unit, inputting the first data set into the basic load prediction unit, and selecting the optimal sequence length and number of segments;
a sequence dividing module: used for selecting from the first data set according to the sequence length to obtain a first sequence, and selecting a second sequence from the first data set based on the first sequence and the predicted sequence length;
a tag sequence output module: used for dividing the second sequence into m segments according to the number of segments, respectively inputting each segment into m LSTM multi-input single-output models, and outputting a tag sequence from the m LSTM multi-input single-output models;
a superposition module: used for superposing the first sequence and the tag sequence to obtain a third sequence;
a segment prediction module: used for dividing the third sequence into m segments according to the number of segments, respectively inputting each segment into m LSTM multiple-input multiple-output models, and outputting m segmentation prediction results from the m LSTM multiple-input multiple-output models;
an output module: used for splicing the m segmentation prediction results to obtain a load prediction result and outputting the load prediction result.
In a third aspect, the present invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing a two-stage parallel integrated load prediction method as described above when executing the computer program.
In a fourth aspect, the present invention provides a computer readable storage medium storing a computer program which when executed by a processor implements a two-stage parallel integrated load prediction method as described above.
Compared with the prior art, the invention at least comprises the following beneficial effects:
according to the method, the second sequence is divided and labels are calculated, the labels are combined with the first sequence, and prediction is carried out again in segments, so that both training complexity and prediction accuracy are taken into account and an optimal effect is obtained.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
In the drawings:
FIG. 1 is a flow chart of a two-stage parallel integrated load prediction method of the present invention;
FIG. 2 is a block diagram of a two-stage parallel integrated load prediction apparatus according to the present invention;
FIG. 3 is a schematic diagram of an example of a two-stage parallel integrated load prediction method according to the present invention;
FIG. 4 is a graph of the cumulative distribution function of absolute errors in example 5 of a two-stage parallel integrated load prediction method according to the present invention;
FIG. 5 is a scatter diagram of the predicted results of the different training methods of example 5 in a two-stage parallel integrated load prediction method according to the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings in connection with embodiments. It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The following detailed description is exemplary and is intended to provide further details of the invention. Unless defined otherwise, all technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the invention.
Example 1
A two-stage parallel integrated load prediction method, as shown in figure 1, comprises the following steps:
s1, acquiring load output historical data, and preprocessing to obtain a first data set;
specifically, the load output history data is selected to be about two years long;
specifically, the preprocessing comprises the following steps:
checking abnormal values in the load output history data;
correcting the abnormal values by interpolation;
normalizing the corrected data to obtain a first data set;
In the provided load output history data, the data in some time periods are empty or obviously erroneous; such data are referred to as missing values. Missing values are handled by interpolation, filling each gap with the average of its neighbouring values.
For example, suppose time periods 1, 2 and 3 correspond to data x_1, x_2 and x_3. When x_1 and x_3 are known and x_2 is missing, the x_2 satisfying the following formula is used to fill the gap, preserving the continuity of the data:
x_2 = (x_1 + x_3) / 2;
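A minimal sketch of this preprocessing step is given below; the use of pandas/NumPy, the 3-sigma outlier criterion and min-max normalization are illustrative assumptions rather than details fixed by the patent.

```python
import numpy as np
import pandas as pd

def preprocess(load_series: pd.Series) -> np.ndarray:
    """Illustrative preprocessing: flag outliers, interpolate gaps, normalize to [0, 1]."""
    s = load_series.astype(float).copy()

    # Treat values more than 3 standard deviations from the mean as abnormal (assumed criterion).
    mean, std = s.mean(), s.std()
    s[(s - mean).abs() > 3 * std] = np.nan

    # Fill missing/abnormal points by linear interpolation,
    # which reduces to x_2 = (x_1 + x_3) / 2 for a single gap.
    s = s.interpolate(method="linear", limit_direction="both")

    # Min-max normalization yields the first data set.
    return ((s - s.min()) / (s.max() - s.min())).to_numpy()
```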
S2, taking an LSTM network as a basic load prediction unit, presetting a predicted sequence length, inputting a first data set into the basic load prediction unit, and selecting an optimal sequence length and segment number;
specifically, the LSTM network parameters further include the batch sampling number, learning rate, number of LSTM layers, LSTM layer size, number of attention-layer neurons, and attention-layer neuron size, all of which are manually preset values;
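For concreteness, a minimal PyTorch sketch of such a base load prediction unit (an LSTM followed by a simple additive attention layer) is given below; the exact layer sizes and the form of the attention mechanism are illustrative assumptions, not values fixed by the patent.

```python
import torch
import torch.nn as nn

class BaseLoadPredictor(nn.Module):
    """LSTM base unit with a simple additive attention layer (illustrative sketch)."""
    def __init__(self, input_size=1, hidden_size=64, num_layers=2,
                 attn_size=32, output_size=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.attn = nn.Sequential(nn.Linear(hidden_size, attn_size),
                                  nn.Tanh(),
                                  nn.Linear(attn_size, 1))
        self.head = nn.Linear(hidden_size, output_size)

    def forward(self, x):                       # x: (batch, seq_len, input_size)
        h, _ = self.lstm(x)                     # (batch, seq_len, hidden_size)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time steps
        context = (w * h).sum(dim=1)            # weighted sum of hidden states
        return self.head(context)               # (batch, output_size)
```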
specifically, the step of taking the LSTM network as a base load prediction unit, presetting a predicted sequence length h, inputting the first data set into the base load prediction unit, and selecting an optimal sequence length and segment number specifically includes:
presetting a predicted sequence length, generating a random sequence length, and inputting a first data set into a basic load prediction unit according to the random sequence length to obtain a basic prediction result;
calculating the root mean square deviation and the average absolute deviation of the basic prediction result;
selecting a random sequence length corresponding to the minimum weighted root mean square deviation and average absolute deviation as a sequence length;
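A minimal sketch of this selection step is shown below. The candidate lengths, the equal 0.5/0.5 weighting of the two error terms, and the `train_and_predict` helper (standing in for training the base LSTM unit and producing predictions on a held-out split) are assumptions introduced for illustration only.

```python
import numpy as np

def select_sequence_length(first_dataset, candidate_lengths, train_and_predict,
                           w_rmse=0.5, w_mae=0.5):
    """Pick the input sequence length k minimizing a weighted RMSE + MAE score."""
    best_k, best_score = None, float("inf")
    for k in candidate_lengths:                                # randomly generated lengths
        y_true, y_pred = train_and_predict(first_dataset, k)   # base LSTM prediction unit
        rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))        # root mean square deviation
        mae = np.mean(np.abs(y_true - y_pred))                 # mean absolute deviation
        score = w_rmse * rmse + w_mae * mae
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```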
specifically, the number of segments is selected according to the predicted sequence length: the greater the number of segments, the higher the accuracy, but also the higher the corresponding cost.
Specifically, the multi-output step size is h/m, where h is the predicted sequence length and m is the number of segments; the sequence length is denoted k.
Specifically, the number of segments is less than or equal to the predicted sequence length, and h/m is an integer.
S3, selecting from the first data set according to the sequence length to obtain a first sequence, and selecting a second sequence from the first data set according to the first sequence and the predicted sequence length;
specifically, the first sequence is expressed as (x_1, x_2, ..., x_k);
specifically, the second sequence is expressed as (x_{k+1}, x_{k+2}, ..., x_{k+h}).
S4, dividing the second sequence into m segments according to the number of the segments, respectively inputting each segment into m LSTM multi-input single-output models, and outputting a tag sequence by the m LSTM multi-input single-output models;
specifically, the LSTM multiple-input single-output model is used to calculate the average value of each segment as a label, denoted y; the label sequence is (y_1, y_2, ..., y_m).
Specifically, the label of the j-th segment is the average of that segment of the second sequence:
y_j = (m/h)·(x_{k+(j-1)·h/m+1} + x_{k+(j-1)·h/m+2} + ... + x_{k+j·h/m}),  j = 1, 2, ..., m.
s5, superposing the first sequence and the tag sequence to obtain a third sequence;
specifically, the calculation flow in S5-S6 is shown in FIG. 3;
s6, dividing the third sequence into m segments according to the number of the segments, respectively inputting each segment into m LSTM multiple-input multiple-output models, and outputting m segmentation prediction results by the m LSTM multiple-input multiple-output models;
and S7, splicing the m segmentation prediction results to obtain a load prediction result.
Specifically, after S4, the first sequence and the tag sequence together form m groups of new input data, each of length k+1 (the first sequence plus one label). The new label for each group is the original data of length h/m from which the corresponding y was calculated, so m groups of second-stage labels are generated in total, meeting the requirements of the LSTM parallel multi-output models. The second-stage data are predicted in parallel by the m models, and the results are spliced into the final prediction result of length h.
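As a schematic illustration of this second stage, the sketch below shows how each of the m groups of length-(k+1) inputs could be formed and how the m segment predictions are spliced back into a length-h forecast. The `samples` structure, the callable `mimo_models` interface, and the use of the computed segment averages as labels (at inference time the labels would instead come from the first-stage LSTM outputs) are assumptions for illustration only.

```python
import numpy as np

def two_stage_predict(samples, mimo_models, m):
    """samples: list of (first_seq, labels, segments) as in build_stage_one_samples.
    mimo_models: m trained multiple-input multiple-output models, one per segment."""
    forecasts = []
    for first_seq, labels, _ in samples:
        parts = []
        for j in range(m):
            # j-th second-stage input: the first sequence plus the j-th label (length k + 1)
            x = np.concatenate([first_seq, labels[j:j + 1]])
            parts.append(mimo_models[j](x))        # each model predicts h/m future points
        forecasts.append(np.concatenate(parts))    # splice the m segments into a length-h forecast
    return np.asarray(forecasts)
```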
Example 2
A two-stage parallel integrated load prediction apparatus, as shown in fig. 2, comprising:
and a preprocessing module: used for obtaining load output historical data and preprocessing it to obtain a first data set;
specifically, the load output history data is selected to be about two years long;
specifically, the preprocessing comprises the following steps:
checking abnormal values in the load output history data;
correcting the abnormal values by interpolation;
normalizing the corrected data to obtain a first data set;
In the provided load output history data, the data in some time periods are empty or obviously erroneous; such data are referred to as missing values. Missing values are handled by interpolation, filling each gap with the average of its neighbouring values.
For example, suppose time periods 1, 2 and 3 correspond to data x_1, x_2 and x_3. When x_1 and x_3 are known and x_2 is missing, the x_2 satisfying the following formula is used to fill the gap, preserving the continuity of the data:
x_2 = (x_1 + x_3) / 2;
Parameter acquisition module: used for presetting a predicted sequence length, taking an LSTM network as a basic load prediction unit, inputting the first data set into the basic load prediction unit, and selecting the optimal sequence length and number of segments;
specifically, the LSTM network parameters further include the batch sampling number, learning rate, number of LSTM layers, LSTM layer size, number of attention-layer neurons, and attention-layer neuron size, all of which are manually preset values;
specifically, the step of taking the LSTM network as a base load prediction unit, presetting a predicted sequence length h, inputting the first data set into the base load prediction unit, and selecting an optimal sequence length and segment number specifically includes:
presetting a predicted sequence length, generating a random sequence length, and inputting a first data set into a basic load prediction unit according to the random sequence length to obtain a basic prediction result;
calculating the root mean square deviation and the average absolute deviation of the basic prediction result;
selecting a random sequence length corresponding to the minimum weighted root mean square deviation and average absolute deviation as a sequence length;
specifically, the number of segments is selected according to the predicted sequence length: the greater the number of segments, the higher the accuracy, but also the higher the corresponding cost.
Specifically, the multi-output step size is h/m, where h is the predicted sequence length and m is the number of segments; the sequence length is denoted k.
Specifically, the number of segments is less than or equal to the predicted sequence length, and h/m is an integer.
And a sequence dividing module: used for selecting from the first data set according to the sequence length to obtain a first sequence, and selecting a second sequence from the first data set based on the first sequence and the predicted sequence length;
specifically, the first sequence is expressed as (x_1, x_2, ..., x_k);
specifically, the second sequence is expressed as (x_{k+1}, x_{k+2}, ..., x_{k+h}).
The tag sequence output module: dividing the second sequence into m segments according to the number of the segments, respectively inputting each segment into m LSTM multi-input single-output models, and outputting tag sequences by the m LSTM multi-input single-output models;
specifically, the LSTM multiple-input single-output model is used to calculate the average value of each segment as a label, denoted y; the label sequence is (y_1, y_2, ..., y_m).
Specifically, the label of the j-th segment is the average of that segment of the second sequence:
y_j = (m/h)·(x_{k+(j-1)·h/m+1} + x_{k+(j-1)·h/m+2} + ... + x_{k+j·h/m}),  j = 1, 2, ..., m.
and a superposition module: used for superposing the first sequence and the tag sequence to obtain a third sequence;
specifically, the calculation flow in S5-S6 is shown in FIG. 3;
segment prediction module: dividing the third sequence into m segments according to the number of segments, respectively inputting each segment into m LSTM multiple-input multiple-output models, and outputting m segmentation prediction results by the m LSTM multiple-input multiple-output models;
and an output module: used for splicing the m segmentation prediction results to obtain a load prediction result and outputting the load prediction result.
Specifically, in the superposition module and the segment prediction module, the first sequence and the tag sequence together form m groups of new input data, each of length k+1 (the first sequence plus one label). The new label for each group is the original data of length h/m from which the corresponding y was calculated, so m groups of second-stage labels are generated in total, meeting the requirements of the LSTM parallel multi-output models. The second-stage data are predicted in parallel by the m models, and the results are spliced into the final prediction result of length h.
Example 3
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing a two-stage parallel integrated load prediction method as described above when executing the computer program.
Example 4
A computer readable storage medium storing a computer program which when executed by a processor implements a two-stage parallel integrated load prediction method as described above.
Example 5
An example of the method according to example 1;
For the load sequence, the CLA-GMDN model is trained and tested with the two-stage parallel integrated prediction method, the single-step prediction method, the recursive prediction method and the multi-step prediction method, respectively, to forecast the 24-hour time series of the next day. The point prediction results are evaluated with the NRMSD and NMAD indices, and the comparison results are shown in Table 1.
Table 1 comparison of predictive performance for various training methods
It can be seen that, for the point prediction task, the NRMSD and NMAD indices of the two-stage parallel integrated prediction method are the best. Compared with the single-step prediction method, the recursive prediction method and the multi-step prediction method, its normalized mean absolute error is reduced by 65.8%, 76.2% and 56.7%, respectively. The recursive prediction method performs worst because the number of prediction steps in this experiment is 24 and the error grows with the number of steps, leading to accumulated prediction error. The single-step and multi-step prediction indices are relatively close. In terms of model training complexity, the single-step training method needs to train 24 models, each predicting a specific time period; the two-stage parallel prediction method needs to train 12 models; the recursive prediction method and the multi-step prediction method can complete the prediction of the 24 future periods by training only 1 model. The two-stage prediction method therefore has both lower model complexity and higher prediction accuracy than the single-step prediction method. Combining accuracy and training complexity, the four training methods rank as follows: two-stage parallel integrated prediction method > multi-step prediction method > single-step prediction method > recursive prediction method.
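The NRMSD and NMAD indices used in Table 1 are normalized versions of the root mean square deviation and the mean absolute deviation; a small sketch of one common range-normalized definition (the normalization convention is an assumption, since the patent does not spell out the formula) is:

```python
import numpy as np

def nrmsd(y_true, y_pred):
    """Normalized root mean square deviation (range-normalized, assumed convention)."""
    rng = y_true.max() - y_true.min()
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / rng

def nmad(y_true, y_pred):
    """Normalized mean absolute deviation (range-normalized, assumed convention)."""
    rng = y_true.max() - y_true.min()
    return np.mean(np.abs(y_true - y_pred)) / rng
```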
To further demonstrate the performance of the four training methods, the cumulative distribution function of the absolute error over the test set is shown in FIG. 4:
the intersection points in FIG. 4 indicate that all four training methods have an absolute error smaller than 0.15 for 99.3% of the test samples. In the two-stage parallel integrated prediction method, 98.3% of the test data have an absolute error smaller than 0.05, while in the single-step, multi-step and recursive prediction methods roughly 95.2% of the test data have an absolute error smaller than 0.05. The intersection point (0.008, 0.222) shows that in the single-step and recursive prediction methods 22.2% of the samples have an absolute error smaller than 0.008, slightly more than in the two-stage integrated prediction method; this is mainly because single-point prediction is highly accurate for the time step immediately following the current sample, but the accuracy drops rapidly as the prediction horizon grows and falls far behind the two-stage parallel integrated method beyond the intersection point (0.008, 0.222). The ranking of the training methods obtained from the absolute error cumulative distribution function is consistent with the above.
Fig. 5 illustrates the prediction results of the two-stage parallel integrated prediction method and the comparative prediction algorithm in the form of a scatter diagram. In the figure, the abscissa is a predicted value, the ordinate is an actual value, the black dotted line in the figure indicates that the predicted value is equal to the actual value, and the scattered point is a model predicted result. The closer the scatter is to the dashed line, the higher the model prediction accuracy is. Compared with a comparison model, the prediction value of the two-stage parallel integrated prediction method is most converged, and the recursive prediction method is most dispersed.
To sum up: taking the LSTM model as an example for time series prediction, the two-stage parallel integrated prediction method provided by this patent balances training complexity and prediction accuracy and achieves the best overall effect.
It will be appreciated by those skilled in the art that the present invention can be carried out in other embodiments without departing from the spirit or essential characteristics thereof. Accordingly, the above disclosed embodiments are illustrative in all respects, and not exclusive. All changes that come within the scope of the invention or equivalents thereto are intended to be embraced therein.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (10)

1. The two-stage parallel integrated load prediction method is characterized by comprising the following steps of:
acquiring load output historical data, and preprocessing to obtain a first data set;
taking an LSTM network as a basic load prediction unit, presetting a predicted sequence length, inputting a first data set into the basic load prediction unit, and selecting an optimal sequence length and segment number;
selecting from the first data set according to the sequence length, thereby obtaining a first sequence, and selecting a second sequence from the first data set according to the first sequence and the predicted sequence length;
dividing the second sequence into m segments according to the number of the segments, respectively inputting each segment into m LSTM multi-input single-output models, and outputting a tag sequence by the m LSTM multi-input single-output models;
superposing the first sequence and the tag sequence to obtain a third sequence;
dividing the third sequence into m segments according to the number of segments, respectively inputting each segment into m LSTM multiple-input multiple-output models, and outputting m segmentation prediction results by the m LSTM multiple-input multiple-output models;
and splicing the m segmentation prediction results to obtain a load prediction result.
2. The method for predicting load by two-stage parallel integration according to claim 1, wherein the step of obtaining load output history data and preprocessing to obtain the first data set specifically comprises:
checking abnormal values in the load output history data;
correcting the abnormal values by interpolation;
and carrying out normalization processing on the corrected data to obtain a first data set.
3. The method of claim 1, wherein the LSTM network parameters further include a batch sampling number, a learning rate, a number of LSTM layers, an LSTM layer size, a number of attention-layer neurons, and an attention-layer neuron size.
4. The two-stage parallel integrated load prediction method according to claim 1, wherein the step of using the LSTM network as a base load prediction unit, presetting a predicted sequence length, inputting the first data set into the base load prediction unit, and selecting an optimal sequence length and number of segments specifically comprises:
presetting a predicted sequence length, generating a random sequence length, and inputting a first data set into a basic load prediction unit according to the random sequence length to obtain a basic prediction result;
calculating the root mean square deviation and the average absolute deviation of the basic prediction result;
and selecting the random sequence length corresponding to the minimum weighted root mean square deviation and average absolute deviation as the sequence length.
5. The two-stage parallel integrated load prediction method according to claim 1, wherein the number of segments is selected according to the predicted sequence length, and the number of segments is less than or equal to the predicted sequence length.
6. The two-stage parallel integrated load prediction method according to claim 1, wherein the quotient of the predicted sequence length divided by the number of segments is an integer.
7. The two-stage parallel integrated load prediction method according to claim 1, wherein the LSTM multiple-input single-output model is used to calculate an average value of each segment and output as a label.
8. A two-stage parallel integrated load prediction apparatus, comprising:
a preprocessing module: used for obtaining load output historical data and preprocessing it to obtain a first data set;
a parameter acquisition module: used for presetting a predicted sequence length, taking an LSTM network as a basic load prediction unit, inputting the first data set into the basic load prediction unit, and selecting the optimal sequence length and number of segments;
a sequence dividing module: used for selecting from the first data set according to the sequence length to obtain a first sequence, and selecting a second sequence from the first data set based on the first sequence and the predicted sequence length;
a tag sequence output module: used for dividing the second sequence into m segments according to the number of segments, respectively inputting each segment into m LSTM multi-input single-output models, and outputting a tag sequence from the m LSTM multi-input single-output models;
a superposition module: used for superposing the first sequence and the tag sequence to obtain a third sequence;
a segment prediction module: used for dividing the third sequence into m segments according to the number of segments, respectively inputting each segment into m LSTM multiple-input multiple-output models, and outputting m segmentation prediction results from the m LSTM multiple-input multiple-output models;
an output module: used for splicing the m segmentation prediction results to obtain a load prediction result and outputting the load prediction result.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements a two-stage parallel integrated load prediction method according to any one of claims 1-7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements a two-stage parallel integrated load prediction method according to any one of claims 1-7.
CN202311620377.2A 2023-11-29 2023-11-29 Two-stage parallel integrated load prediction method, device, equipment and medium Pending CN117592612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311620377.2A CN117592612A (en) 2023-11-29 2023-11-29 Two-stage parallel integrated load prediction method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311620377.2A CN117592612A (en) 2023-11-29 2023-11-29 Two-stage parallel integrated load prediction method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117592612A (en) 2024-02-23

Family

ID=89918049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311620377.2A Pending CN117592612A (en) 2023-11-29 2023-11-29 Two-stage parallel integrated load prediction method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117592612A (en)

Similar Documents

Publication Publication Date Title
CN110472779B (en) Power system short-term load prediction method based on time convolution network
CN111950810B (en) Multi-variable time sequence prediction method and equipment based on self-evolution pre-training
CN108564326A (en) Prediction technique and device, computer-readable medium, the logistics system of order
CN111027732A (en) Method and system for generating multi-wind-farm output scene
CN115145812B (en) Test case generation method and device, electronic equipment and storage medium
CN110633859A (en) Hydrological sequence prediction method for two-stage decomposition integration
CN117592612A (en) Two-stage parallel integrated load prediction method, device, equipment and medium
CN115618751B (en) Steel plate mechanical property prediction method
Kwak et al. Quantization aware training with order strategy for CNN
CN109800866B (en) Reliability increase prediction method based on GA-Elman neural network
CN112667394B (en) Computer resource utilization rate optimization method
CN115759455A (en) Load probability density prediction method based on time sequence Gaussian mixture density network
CN114399901B (en) Method and equipment for controlling traffic system
CN115081609A (en) Acceleration method in intelligent decision, terminal equipment and storage medium
CN115907000A (en) Small sample learning method for optimal power flow prediction of power system
CN113343468A (en) Method, device and equipment for carrying out multi-step prediction by SARIMA model
CN113095328A (en) Self-training-based semantic segmentation method guided by Gini index
CN112070283A (en) Server operation health degree prediction method and system based on machine learning
Uta et al. Towards machine learning based configuration
CN114168320B (en) End-to-end edge intelligent model searching method and system based on implicit spatial mapping
CN113569660B (en) Learning rate optimization algorithm discount coefficient method for hyperspectral image classification
CN118332481A (en) Project data abnormity early warning method and system for power construction
US20220405599A1 (en) Automated design of architectures of artificial neural networks
CN113344142A (en) Training method, device, equipment and storage medium of SARIMA model
CN117974208A (en) Regional passenger flow prediction system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination