CN110232483B - Deep learning load prediction method and device and terminal equipment - Google Patents


Info

Publication number
CN110232483B
CN110232483B (application CN201910527965.9A)
Authority
CN
China
Prior art keywords
sub, load, prediction, time window, moving time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910527965.9A
Other languages
Chinese (zh)
Other versions
CN110232483A
Inventor
王颖
荆志朋
邵华
张章
张倩茅
任志刚
齐晓光
张丽洁
袁博
刘芮
习朋
朱士加
赵洪山
任惠
闫西慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Economic and Technological Research Institute of State Grid Hebei Electric Power Co Ltd
Original Assignee
North China Electric Power University
Economic and Technological Research Institute of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University and the Economic and Technological Research Institute of State Grid Hebei Electric Power Co Ltd
Priority to CN201910527965.9A
Publication of CN110232483A
Application granted
Publication of CN110232483B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08 - Learning methods
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 10/00 - Administration; Management
    • G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 - Electricity, gas or water supply

Abstract

The invention is applicable to the technical field of data prediction, and provides a deep learning load prediction method, a deep learning load prediction device and terminal equipment. The deep learning load prediction method comprises the following steps: dividing a prediction interval into a plurality of first sub-prediction intervals by using a first preset moving time window, and dividing historical load data into a plurality of first sub-load training data, wherein the length of the historical load data corresponds to the length of the prediction interval; training a deep learning prediction model for each of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data, and obtaining a load prediction value for each sub-prediction interval; and determining a final predicted value according to the predicted value of each sub-prediction interval. By adjusting the fine granularity of the segmented prediction intervals and selecting suitable historical load data, the deep learning load prediction method is applicable to load prediction over different time lengths, and the obtained prediction values are more accurate and reliable.

Description

Deep learning load prediction method and device and terminal equipment
Technical Field
The invention belongs to the field of data prediction, and particularly relates to a deep learning load prediction method, a deep learning load prediction device and terminal equipment.
Background
The load prediction is to scientifically predict the load of hours, days or months in the future according to the historical load change rule and by combining factors such as weather, temperature, economy, politics and the like.
At present, most traditional load prediction methods need to adopt different techniques for different prediction horizons; the load prediction models constructed in this way are high in complexity and low in universality. Moreover, with the penetration and continuous growth of new energy sources (such as wind power generation and photovoltaic power generation) and the power transfer of flexible, controllable loads, the randomness and uncertainty of the power load keep increasing, which places higher requirements on the accuracy and reliability of load prediction.
Disclosure of Invention
In view of this, embodiments of the present invention provide a deep learning load prediction method, an apparatus, and a terminal device, so as to solve the problems that load prediction models constructed in the prior art are high in complexity and low in universality, and that their load predictions are insufficiently accurate and reliable.
The first aspect of the embodiments of the present invention provides a deep learning load prediction method, including:
dividing a prediction interval into a plurality of first sub-prediction intervals by using a first preset moving time window, and dividing historical load data into a plurality of first sub-load training data, wherein the length of the historical load data corresponds to the length of the prediction interval;
training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data, and obtaining a load prediction value of each sub-prediction interval;
and determining a final predicted value according to the predicted value of each sub-prediction interval.
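The three steps above can be sketched end-to-end. The following is a minimal illustration, not the patented implementation: the deep learning model is replaced by a trivial per-window mean predictor, and the names `predict_interval`, `history`, `horizon` and `win` are illustrative.

```python
import numpy as np

def predict_interval(history, horizon, win):
    """Divide the prediction interval of length `horizon` into sub-intervals
    of length `win`, fit one predictor per sub-interval on the matching
    historical windows, and concatenate the sub-predictions."""
    periods = history.reshape(-1, horizon)    # one row per past period
    forecast = []
    for k in range(horizon // win):           # k-th first sub-prediction interval
        # first sub-load training data: the k-th window of every past period
        sub_train = periods[:, k * win:(k + 1) * win]
        # stand-in for the trained deep learning model: mean over past periods
        forecast.append(sub_train.mean(axis=0))
    return np.concatenate(forecast)

history = np.tile(np.arange(24.0), 7)         # 7 identical past "days"
pred = predict_interval(history, horizon=24, win=6)
```

A real implementation would replace the mean with the trained deep LSTM model of each sub-prediction interval; the segmentation and concatenation logic stays the same.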
Preferably, the first preset moving time window is obtained by a clustering algorithm, and the process is as follows:
randomly acquiring a group of second preset moving time windows, and dividing the historical load data into a plurality of groups of second sub-load training data by using the group of second preset moving time windows, wherein the plurality of groups of second sub-load training data are used as the input of a clustering algorithm;
finding the minimum clustering number according to the measuring standard of the clustering algorithm;
and selecting the minimum cluster number, and taking the maximum-length moving time window corresponding to that minimum as the first preset moving time window.
Preferably, the mean silhouette coefficient and the gap statistic are used as the metrics of the clustering algorithm.
Preferably, the group of second preset moving time windows is set in either of the following ways:
each second preset moving time window starts from the same initial length; the historical load data is divided into a group of second sub-load training data using that window, the window length is then increased by a preset step ΔT, and the historical load data is divided again with the increased length, until a plurality of groups of second sub-load training data are obtained; or
the initial length of the second preset moving time window is set according to the rate of change of the historical load data; the previous window length is then repeatedly increased by a preset step ΔT, and the historical load data is divided with each increased length, until a plurality of groups of second sub-load training data are obtained.
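The first (fixed-initial-length) scheme above can be sketched as follows; the function name `candidate_segmentations` and the parameter values are illustrative, not the patent's.

```python
import numpy as np

def candidate_segmentations(series, base_len, step, n_candidates):
    """Generate the group of 'second' moving time windows: start from a
    common initial length, grow it by a preset step dT, and produce one
    segmentation of the historical load data per candidate length."""
    groups = {}
    for i in range(n_candidates):
        win = base_len + i * step              # current window length
        n = len(series) // win                 # number of complete windows
        groups[win] = [series[j * win:(j + 1) * win] for j in range(n)]
    return groups

series = np.arange(96.0)                       # e.g. 96 quarter-hour load points
groups = candidate_segmentations(series, base_len=4, step=4, n_candidates=3)
```

Each group of segments would then be fed to the clustering algorithm to score the corresponding window length.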
Preferably, the first preset moving time window moves as follows:
when the current first preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt = T_win,i, the starting position of the next first preset moving time window adjoins the ending position of the current one, where T_win,i is the length of the current first preset moving time window; or
when the current first preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt < T_win,i, the next first preset moving time window overlaps the current one, where T_win,i is the length of the current first preset moving time window;
the second preset moving time window moves in the same two ways:
when the current second preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt = T_win,i, the starting position of the next second preset moving time window adjoins the ending position of the current one, where T_win,i is the length of the current second preset moving time window; or
when the current second preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt < T_win,i, the next second preset moving time window overlaps the current one, where T_win,i is the length of the current second preset moving time window.
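The two moving modes above (step equal to the window length, giving adjoining windows, versus step smaller than the window length, giving overlap) can be sketched as follows; `window_starts` is an illustrative helper, not part of the patent.

```python
def window_starts(total_len, win, step):
    """Start positions of a moving time window of length `win` advanced by a
    preset step: step == win gives adjoining windows (the start of the next
    window meets the end of the current one); step < win gives overlap."""
    assert 0 < step <= win, "moving step must satisfy 0 < step <= window length"
    starts, s = [], 0
    while s + win <= total_len:
        starts.append(s)
        s += step
    return starts

adjoining = window_starts(12, 4, 4)    # step equal to window length
overlapping = window_starts(12, 4, 2)  # step smaller than window length
```

The assertion enforces Δt ≤ T_win,i, which guarantees that no part of the historical load data is skipped between consecutive windows.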
Preferably, before training the deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and obtaining the load prediction value of each sub-prediction interval, the deep learning load prediction method further includes:
preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data;
normalizing the preprocessed multiple first sub-load training data;
the training of the deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and the obtaining of the load prediction value of each sub-prediction interval specifically include:
preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data;
normalizing the preprocessed multiple first sub-load training data;
and training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data after normalization processing, and obtaining a load prediction value of each sub-prediction interval.
Preferably, the deep learning prediction model is a deep LSTM prediction model constructed by using a Keras deep learning framework, and the training process of the deep LSTM prediction model is as follows:
acquiring first sub-load training data of an ith sub-prediction interval;
searching for the LSTM hyper-parameters by grid search;
and establishing a depth LSTM prediction model by using the first sub-load training data and the LSTM hyper-parameter.
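The training procedure above can be sketched with Keras, which the patent names as the framework. This is a minimal illustration under assumptions: the layer sizes, the hyper-parameter grid, the synthetic data and all variable names are illustrative, not the patent's actual configuration.

```python
import itertools
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm(n_steps, n_features, units, lr):
    # deep LSTM: two stacked LSTM layers followed by a dense output
    model = keras.Sequential([
        keras.Input(shape=(n_steps, n_features)),
        layers.LSTM(units, return_sequences=True),
        layers.LSTM(units),
        layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(lr), loss="mse")
    return model

# first sub-load training data of the i-th sub-prediction interval (synthetic)
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8, 1)).astype("float32")   # 64 samples, 8 time steps
y = X.sum(axis=1)                                   # toy target: window sum

# grid search over the LSTM hyper-parameters
grid = {"units": [8, 16], "lr": [1e-2, 1e-3]}
best, best_loss = None, np.inf
for units, lr in itertools.product(grid["units"], grid["lr"]):
    model = build_lstm(8, 1, units, lr)
    hist = model.fit(X, y, epochs=2, batch_size=16, verbose=0)
    loss = hist.history["loss"][-1]
    if loss < best_loss:
        best, best_loss = (units, lr), loss

# establish the deep LSTM prediction model with the selected hyper-parameters
model = build_lstm(8, 1, *best)
model.fit(X, y, epochs=2, verbose=0)
pred = model.predict(X, verbose=0)
```

In practice the grid would cover more hyper-parameters (layer count, dropout, sequence length) and the selection would use a held-out validation split rather than training loss.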
Preferably, the algorithm of the deep learning prediction model is a deep self-coding neural network.
A second aspect of an embodiment of the present invention provides a deep learning load prediction apparatus, including:
the segmentation module is used for dividing the prediction interval into a plurality of first sub-prediction intervals and dividing the historical load data into a plurality of first sub-load training data;
the prediction module is used for training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and obtaining a load prediction value of each sub-prediction interval;
and the determining module is used for determining a final predicted value according to the load predicted values of the sub-prediction intervals.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the deep learning load prediction method according to any one of the above items when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the deep learning load prediction method as described in any one of the above.
The embodiment of the invention provides a deep learning load prediction method: the prediction interval is divided into a plurality of sub-prediction intervals through a preset moving time window, and the historical load data is divided into a plurality of sub-load training data; the models of the corresponding sub-prediction intervals are then trained with the sub-load training data. By adjusting the length of the first preset moving time window, the fine granularity of the segmented prediction intervals is adjusted and suitable historical load data is selected, making the method applicable to load prediction over different time lengths.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the embodiments or the prior-art descriptions are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a deep learning load prediction method according to the present invention;
FIG. 2 is a diagram illustrating an exemplary deep learning load prediction method according to the present invention;
FIG. 3 is a schematic diagram of time window slicing provided by an embodiment of the present invention;
FIG. 4 is a comparison graph of load predictions provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a deep learning load prediction apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 shows a schematic flow chart of the deep learning load prediction method provided by the present invention; referring to Fig. 1, the method is described in detail as follows.
Step S101, a prediction interval is divided into a plurality of first sub-prediction intervals by using a first preset moving time window, and historical load data is divided into a plurality of first sub-load training data, wherein the length of the historical load data corresponds to the length of the prediction interval.
For short-term load prediction, traditional methods generally apply a clustering algorithm to historical load data to find the load category most similar to the prediction day, and train front-stage and back-stage cascaded BP neural networks according to that category. For medium- and long-term load prediction, a functional non-parametric regression load prediction algorithm is generally established through functional data analysis theory and non-parametric kernel density estimation, and the prediction curve of the non-parametric regression model is then corrected through quadratic programming to obtain the prediction curve of the specified prediction interval. In these conventional approaches, different load prediction methods are usually designed for load prediction intervals of different durations; such methods have low universality, or the constructed load prediction models are very complex.
Specifically, in the embodiment of the invention, the prediction interval and the historical load data are firstly divided by using the first preset moving time window, so that the load prediction interval with longer time length can be divided into the load prediction interval with shorter time length, and the simplification of a load prediction model and the unification of multiple load prediction methods are facilitated.
The historical load data represents load data over a past period of time; its length corresponds to the length of the prediction interval, and historical load data of the corresponding length is selected according to the duration of the specific prediction interval. According to the purpose of load prediction, the duration of the prediction interval can be divided into ultra-short term, short term, medium term and long term: ultra-short-term load prediction is generally limited to the coming hour; short-term load prediction refers to daily or weekly load prediction; medium-term load prediction covers a month up to a year; and long-term load prediction covers the coming 3-5 years or even longer. The duration of the prediction interval is set autonomously according to actual needs.
Preferably, the first preset moving time window can be obtained by a clustering algorithm, and the process of obtaining the first preset moving time window by the clustering algorithm is as follows: randomly acquiring a group of second preset moving time windows, and dividing the historical load data into a plurality of groups of second sub-load training data by using the group of second preset time windows, wherein the plurality of groups of second sub-load training data are used as the input of a clustering algorithm; finding the minimum clustering number according to the measuring standard of the clustering algorithm; and selecting the minimum value of the cluster number, and taking the moving time window with the maximum length corresponding to the minimum value as the first preset moving time window.
Specifically, the clustering algorithm is a statistical analysis method for researching classification problems, is also an important algorithm for data mining, and can deeply mine the internal relation between historical load data by utilizing the clustering algorithm, so that the accuracy of load prediction is improved.
Clustering algorithms can be divided into Partitioning Methods, Hierarchical Methods, Density-Based Methods, Grid-Based Methods, Model-Based Methods, and so on. The specific clustering algorithm can be selected independently, and different metrics can be set according to actual requirements to find the minimum cluster number; taking the moving time window corresponding to the minimum cluster number as the first preset moving time window reduces the complexity of the constructed load prediction model.
Preferably, the mean silhouette coefficient and the gap statistic are used as the metrics of the clustering algorithm.
The silhouette coefficient is a measure of clustering quality that combines cohesion and separation to evaluate the influence of different clustering algorithms, or different runs of the same algorithm, on the clustering result. Specifically, in the embodiment of the present invention, the silhouette coefficient of the i-th sub-load training data sample is:

s_i = (b_i - a_i) / max(a_i, b_i)

where a_i is the average distance from the i-th sub-load training data sample to all other samples in its own class, and b_i is the average distance from the i-th sub-load training data sample to all samples in the nearest different class. "Distance" here denotes the degree of dissimilarity: the greater the distance, the greater the dissimilarity. The Euclidean distance satisfies this condition, so the distance between samples is computed as:

d(x, y) = sqrt( Σ_{j=1..n} (x_j - y_j)² )

where d(x, y) is the Euclidean distance between the two data samples x and y, each of length n.

The silhouette coefficient takes values in the range [-1, 1], and the larger its value, the better the cohesion and separation. Averaging the silhouette coefficients of all sub-load training data samples gives the silhouette coefficient of the clustering result.
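The silhouette computation follows directly from the definitions above; here is a numpy sketch (the function name `mean_silhouette` is illustrative, and libraries such as scikit-learn provide an equivalent `silhouette_score`).

```python
import numpy as np

def mean_silhouette(X, labels):
    """Mean silhouette coefficient: s_i = (b_i - a_i) / max(a_i, b_i),
    with Euclidean distance as the dissimilarity measure."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    s = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False                              # exclude the sample itself
        a = d[i, same].mean()                        # cohesion: own class
        b = min(d[i, labels == c].mean()             # separation: nearest other class
                for c in np.unique(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# two tight, well-separated clusters -> mean silhouette close to 1
X = np.array([[0.0, 0.0], [0.0, 0.2], [9.0, 9.0], [9.0, 9.2]])
labels = np.array([0, 0, 1, 1])
score = mean_silhouette(X, labels)
```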
The gap statistic is used to determine the number of clusters. It compares the total within-cluster variation for different cluster numbers k with its expected value under a reference distribution of the load data. The reference data are generated by the Monte Carlo method: for each feature, the minimum and maximum values are computed, and random values are drawn uniformly between them. For both the actual data and the reference data, the total within-cluster variation is computed for different values of k. Given the cluster number k, the gap statistic is computed as follows:

D_r = Σ_{x, x' ∈ C_r} ||x - x'||²

W_k = Σ_{r=1..k} D_r / (2 n_r)

Gap_n(k) = E*_n[log W_k] - log W_k

where E*_n denotes the expectation under a reference sample of size n; C_r denotes the r-th cluster class, n_r = |C_r|, and D_r is the sum of squared pairwise Euclidean distances between the sample points within the class.
The evaluation index is calculated as follows:
1) Cluster the actual sub-load training data sets, varying the cluster number k = 1, 2, …, k_max, and compute the corresponding W_k.
2) Generate and cluster B reference data sets, varying the cluster number k = 1, 2, …, k_max, and compute the corresponding Gap_n(k). Let

w̄ = (1/B) Σ_{b=1..B} log W*_{kb}

where B is the number of reference data sets generated. Then compute the standard deviation

sd(k) = sqrt( (1/B) Σ_{b=1..B} (log W*_{kb} - w̄)² )

and set

s_k = sd(k) · sqrt(1 + 1/B)

3) Finally, select the minimum cluster number k that satisfies Gap_n(k) ≥ Gap_n(k+1) - s_{k+1}. This k is the optimal cluster number, and the maximum-length moving time window corresponding to this cluster number is selected as the first preset moving time window.
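The gap-statistic procedure above can be sketched as follows. This is a minimal illustration under assumptions: a tiny k-means is used as a stand-in for the (unspecified) clustering step, and the function names, data and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_labels(X, k, restarts=5, iters=30):
    """Tiny k-means used as a stand-in clustering step (illustrative only)."""
    best_labels, best_w = None, np.inf
    for _ in range(restarts):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(0)
        w = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
        if w < best_w:
            best_labels, best_w = labels, w
    return best_labels

def log_wk(X, k):
    """log W_k, with W_k = sum_r D_r / (2 n_r) and D_r the within-cluster
    sum of squared pairwise Euclidean distances."""
    labels = kmeans_labels(X, k)
    wk = 0.0
    for j in range(k):
        C = X[labels == j]
        if len(C) > 1:
            wk += ((C[:, None] - C[None]) ** 2).sum() / (2 * len(C))
    return np.log(wk)

def gap_statistic(X, k_max=3, B=10):
    """Gap_n(k) = E*[log W_k] - log W_k over uniform Monte Carlo references;
    returns the smallest k with Gap(k) >= Gap(k+1) - s_{k+1}."""
    lo, hi = X.min(0), X.max(0)
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        # reference sets: uniform draws inside the data's bounding box
        ref = [log_wk(rng.uniform(lo, hi, X.shape), k) for _ in range(B)]
        gaps.append(np.mean(ref) - log_wk(X, k))
        sks.append(np.std(ref) * np.sqrt(1 + 1 / B))
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:
            return k, gaps
    return k_max, gaps

# two tight, well-separated load clusters -> the gap curve favours k = 2
X = np.vstack([rng.normal(0.0, 0.05, (20, 2)), rng.normal(5.0, 0.05, (20, 2))])
k_hat, gaps = gap_statistic(X)
```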
When the two metrics select different minimum cluster numbers, the relatively smaller cluster number is preferentially chosen as the optimal one, so as to determine the length of the first preset moving time window.
Step S102, training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data, and obtaining a load prediction value of each sub-prediction interval.
Specifically, the deep learning prediction model of the plurality of first sub-prediction intervals is trained by using the plurality of first sub-load training data, so that the obtained load prediction value of each sub-prediction interval can be more accurate.
Step S103, determining a final predicted value according to the predicted value of each sub-prediction interval.
Specifically, the predicted values of the sub-prediction intervals are integrated according to a time sequence, and the integrated predicted values are subjected to inverse normalization to obtain the final predicted value.
The invention provides a deep learning load prediction algorithm, which comprises the steps of dividing a prediction interval into a plurality of first sub-prediction intervals by utilizing a first preset moving time window, and dividing historical load data into a plurality of first sub-load training data, wherein the length of the historical load data corresponds to the length of the prediction interval; training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data, and obtaining a load prediction value of each sub-prediction interval; and determining a final predicted value according to the predicted value of each sub-prediction interval.
Therefore, in the present invention, the prediction interval and the historical load data can be segmented with the first preset moving time window, and the deep learning prediction model of each first sub-prediction interval is trained with the first sub-load training data obtained after segmentation. A longer prediction interval can thus be divided into shorter ones, so that a single deep learning load prediction method can realize load prediction for prediction intervals of different time lengths, and the accuracy of the prediction result can be increased by adjusting the fine granularity of the segmented sub-prediction intervals.
On the basis of the above-described embodiment:
as a preferred embodiment, the setting method of the group of second preset moving time windows includes:
the initial length of each second preset moving time window is the same, the second preset moving time window is utilized to divide the historical load data into a group of second sub-load training data, after a preset step length delta T is added to the last length of the second preset moving time window, the historical load data are divided according to the current length of the added second preset moving time window until a plurality of groups of second sub-load training data are obtained; or
And setting the length of the second preset moving time window according to the change rate of the historical load data, and dividing the historical load data according to the increased current length of the second preset moving time window after increasing the previous length of the second preset moving time window by a preset step length delta T until a plurality of groups of second sub-load training data are obtained.
Specifically, the historical load data divided by each second preset moving time window is clustered, and by comparing the several clustering results a better length can be selected for the first preset moving time window, which optimizes the trained deep learning prediction model and yields a more accurate prediction result.
Once the length of a second preset moving time window, after adding the preset step ΔT, is large enough, the length is no longer increased; the preset step ΔT is determined according to actual conditions.
As a preferred embodiment, the first preset moving time window moves as follows:
when the current first preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt = T_win,i, the starting position of the next first preset moving time window adjoins the ending position of the current one, where T_win,i is the length of the current first preset moving time window; or
when the current first preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt < T_win,i, the next first preset moving time window overlaps the current one, where T_win,i is the length of the current first preset moving time window;
the second preset moving time window moves in the same way:
when the current second preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt = T_win,i, the starting position of the next second preset moving time window adjoins the ending position of the current one, where T_win,i is the length of the current second preset moving time window; or
when the current second preset moving time window performs its next movement, if the preset moving step Δt satisfies Δt < T_win,i, the next second preset moving time window overlaps the current one, where T_win,i is the length of the current second preset moving time window.
Specifically, the moving step Δt satisfies Δt ≤ T_win,i, which ensures that the historical load data can be segmented completely without omission. When the moving step satisfies Δt < T_win,i, the prediction interval and the historical load data can be divided at a finer granularity, making the deep learning load prediction method suitable for prediction intervals of different durations.
As a preferred embodiment, before training the deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and obtaining the load prediction value of each sub-prediction interval, the deep learning load prediction method further includes:
preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data;
normalizing the preprocessed multiple first sub-load training data;
the training of the deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and the obtaining of the load prediction value of each sub-prediction interval specifically include:
preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data;
normalizing the preprocessed multiple first sub-load training data;
and training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data after normalization processing, and obtaining a load prediction value of each sub-prediction interval.
Specifically, after the historical load data having the same length as the prediction interval are divided into a plurality of sub-load training data by using the first preset moving time window, missing values and abnormal data may exist in the historical load data owing to problems such as improper manual operation and equipment aging during sampling; that is, missing values and abnormal data may exist in each sub-load training data. Therefore, the sub-load training data corresponding to each sub-prediction interval need to be preprocessed first, correcting the abnormal data or filling the missing data;
because the deep learning prediction algorithm is sensitive to the data scale, the preprocessed data is normalized:
x'_i = (x_i − min x_i) / (max x_i − min x_i)

in the formula: X' represents the normalized data matrix; t_k represents the length of the k-th sub-data set; x_i represents the i-th row vector; min x_i and max x_i denote the minimum and maximum values of x_i.
Specifically, the preprocessed data are mapped to the range of 0-1, so that the data are processed more conveniently and rapidly.
Of course, in addition to performing normalization processing on the divided sub-load training data, other forms of processing may be performed, and the embodiment of the present invention is not limited herein.
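The preprocessing and min-max normalization steps above can be sketched as follows. The interpolation-based filling and 3-sigma clamping are one plausible choice of preprocessing, assumed here for illustration; the patent does not prescribe a specific correction rule.

```python
import numpy as np

def preprocess(sub_load):
    """Fill missing points (NaN) by linear interpolation and clamp
    outliers to 3 standard deviations around the mean (assumed rules)."""
    x = np.asarray(sub_load, dtype=float)
    idx = np.arange(len(x))
    mask = np.isnan(x)
    if mask.any():                                   # fill missing data
        x[mask] = np.interp(idx[mask], idx[~mask], x[~mask])
    mu, sigma = x.mean(), x.std()
    return np.clip(x, mu - 3 * sigma, mu + 3 * sigma)  # correct abnormal data

def min_max_normalize(x):
    """Map a sub-load series into [0, 1]: x' = (x - min x) / (max x - min x)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

raw = [10.0, np.nan, 14.0, 12.0, 16.0]   # one missing sample
clean = preprocess(raw)
norm = min_max_normalize(clean)
print(norm.min(), norm.max())            # 0.0 1.0
```

The saved (min, max) pair per sub-interval is what later allows the inverse normalization of the predictions.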
As a preferred embodiment, the deep learning prediction model is a deep LSTM prediction model constructed by using a Keras deep learning framework, and the training process of the deep LSTM prediction model is as follows:
acquiring first sub-load training data of an ith sub-prediction interval;
searching the LSTM hyper-parameter by using a grid;
and establishing a depth LSTM prediction model by using the first sub-load training data and the LSTM hyper-parameter.
Specifically, Keras is a high-level neural network API (Application Program Interface) written in pure Python (a computer programming language). Keras aims to support rapid experimentation, quickly turning ideas into results, and has the advantages of being user-friendly, modular, and easy to extend.
The LSTM is a Long Short-Term Memory network, an improved RNN (Recurrent Neural Network) model; compared with a standard RNN, it is better suited to processing and predicting important events with relatively long intervals and delays in a time series.
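The grid search over LSTM hyper-parameters mentioned above can be sketched as below. To keep the example self-contained, the evaluation function is a hypothetical stand-in: a real version would build and train a Keras depth-LSTM with the given hyper-parameters and return its validation error. The grid contents (hidden units, epochs, loss) follow the hyper-parameters the text names, but the exact values are assumptions.

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Exhaustively try every hyper-parameter combination and return the
    one with the lowest validation error (evaluate returns an error)."""
    names = list(param_grid)
    best_params, best_err = None, float("inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        err = evaluate(params)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

# Hypothetical stand-in for "train a depth-LSTM with these hyper-parameters
# and return its validation error".
def fake_evaluate(p):
    return abs(p["hidden_units"] - 64) + p["epochs"] * 0.01

grid = {"hidden_units": [32, 64, 128], "epochs": [100, 200], "loss": ["mse"]}
best, err = grid_search(grid, fake_evaluate)
print(best["hidden_units"])   # 64
```

Exhaustive search is feasible here because each sub-prediction interval trains its own small model on its own sub-load data.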
As a preferred embodiment, the algorithm of the deep learning prediction model is a deep self-coding neural network.
Of course, in addition to the LSTM and the deep self-coding neural network, the algorithm of the deep learning prediction model may also be replaced by another type of neural network, and the embodiment of the present invention is not limited herein.
Referring to fig. 2, fig. 2 is a specific exemplary diagram of a deep learning load prediction method provided by the present invention, and the specific exemplary steps are as follows:
selecting input load data:
This includes determining the interval length T for the load forecast and selecting different historical load data according to the interval length (short-term or medium/long-term load forecasting) and the load resolution (i.e., the interval between adjacent load data points) required for the forecast. For example, to predict the load for the next day, historical load data from the same period in previous weeks may be selected.
Moving time window segmentation dataset:
Let the length of the prediction interval be T and the length of the i-th second preset moving time window be Twin,i. A second preset moving time window is designed and moved from the starting end point of the historical load data interval T to its ending end point; during the movement, the historical load data within the range of each second preset moving time window are segmented, stored separately, and recorded as second sub-load training data. At this point, segmenting the data set with the second preset moving time window is a preliminary segmentation: the second preset moving time window divides the historical training data with the same length as the prediction interval into a plurality of second sub-load training data, and the segmented second sub-load training data are used as the input of a clustering algorithm so as to search for the first preset moving time window by using the clustering algorithm.
Determining the optimal segmentation by a clustering algorithm:
The length of the time window should be set neither too short nor too long, so the length of the first preset moving time window is determined by a clustering algorithm.
Firstly, a plurality of second sub-load training data segmented by using a second preset moving time window are used as input of a clustering algorithm, and an average profile coefficient and interval statistics are used as measurement standards to find out the optimal clustering number;
secondly, selecting the minimum value of the cluster number and the maximum moving time window length corresponding to the minimum value.
And finally, taking the moving time window with the maximum length corresponding to the minimum value of the cluster number as the first preset moving time window.
When the minimum cluster numbers selected by the two measurement standards differ, the relatively smaller cluster number is preferentially selected as the optimal cluster number so as to determine an appropriate first preset moving time window length.
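The selection logic of the three steps above (find the optimal cluster number per candidate window under both metrics, take the overall minimum, then take the longest window achieving it) can be sketched as follows. Computing the silhouette coefficient and gap statistic themselves is omitted; the candidate tuples are assumed inputs produced by that clustering step.

```python
def choose_first_window(candidates):
    """candidates: list of (window_length, k_silhouette, k_gap), where the
    k values are the optimal cluster numbers suggested by the average
    silhouette coefficient and by the gap statistic for that window length.
    Returns (window_length, k): the longest window whose optimal cluster
    number equals the overall minimum cluster number."""
    # when the two metrics disagree, prefer the smaller cluster number
    best_k = {w: min(ks, kg) for w, ks, kg in candidates}
    k_min = min(best_k.values())
    return max(w for w, k in best_k.items() if k == k_min), k_min

cands = [(24, 6, 5), (48, 4, 4), (96, 4, 5)]   # hypothetical candidate windows
win, k = choose_first_window(cands)
print(win, k)   # 96 4
```

Here both the 48-point and 96-point windows achieve the minimum cluster number 4, and the longer window is kept, matching the rule of taking the maximum-length window corresponding to the minimum cluster number.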
Normalization of optimal segmentation data:
after the historical load data with the same length as the prediction interval are divided into a plurality of first sub-load training data by using an appropriate first moving time window, the first sub-load training data corresponding to each sub-prediction interval are preprocessed, and abnormal data are corrected or missing data filled;
because the deep learning prediction algorithm is sensitive to the data scale, the preprocessed data are normalized, and the normalized data are between 0 and 1, so that the data processing is more convenient.
After the data are normalized, other feature data can be added to each first sub-load training data to improve prediction accuracy, and the input and output quantities of the deep learning network can be determined according to the sampling frequency, the other feature data, and each first sub-load training data;
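Determining the network's input and output quantities from a sub-load series amounts to building supervised (input window, output window) pairs, optionally stacking extra feature columns onto each input. The sketch below is an assumed construction consistent with the description; window sizes and names are illustrative.

```python
import numpy as np

def make_supervised(series, n_in, n_out, extra_features=None):
    """Turn a normalized sub-load series into (X, y) samples: each input is
    `n_in` consecutive points (optionally concatenated column-wise with
    per-step extra features), each output the following `n_out` points."""
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        window = series[i:i + n_in]
        if extra_features is not None:
            window = np.column_stack([window, extra_features[i:i + n_in]])
        X.append(window)
        y.append(series[i + n_in:i + n_in + n_out])
    return np.asarray(X), np.asarray(y)

s = np.linspace(0.0, 1.0, 10)
X, y = make_supervised(s, n_in=4, n_out=2)
print(X.shape, y.shape)   # (5, 4) (5, 2)
```

The number of samples is len(series) − n_in − n_out + 1, so the sampling frequency directly controls how many training pairs a sub-interval yields.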
modeling and predicting of sub-prediction intervals:
After preprocessing and normalizing each sub-load training data, a prediction model is established for the corresponding sub-prediction interval, specifically:
firstly, determining original data of an ith sub-prediction interval;
secondly, searching LSTM hyper-parameters by using grids;
then establishing a prediction model according to the LSTM hyperparameter;
and finally, obtaining a load predicted value of the ith subinterval according to the prediction model.
Wherein, a deep LSTM model is constructed by using the Keras deep learning framework. When establishing the LSTM model, several hyper-parameters in the model need to be determined, such as the number of hidden states, the loss function, and the number of model iterations. Grid search is used to find the optimal values of these hyper-parameters, and the deep LSTM prediction model is then established.
And integrating the predicted values of the sub-prediction intervals, performing inverse normalization, and outputting predicted load data.
Specifically, assuming that there are k sub-prediction intervals, it is necessary to determine whether the sequence number i of the ith sub-prediction interval is greater than k, and when i > k, the prediction values of the sub-prediction intervals are integrated to ensure that the prediction value of each sub-prediction interval is counted.
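The integration and inverse normalization of the k sub-interval predictions can be sketched as follows, assuming each sub-interval stored its own (min, max) pair during normalization. The values are illustrative.

```python
import numpy as np

def integrate_predictions(sub_preds, scales):
    """Concatenate the k sub-interval predictions in order and undo the
    per-sub-interval min-max normalization: x = x' * (max - min) + min."""
    parts = []
    for pred, (lo, hi) in zip(sub_preds, scales):
        parts.append(np.asarray(pred, dtype=float) * (hi - lo) + lo)
    return np.concatenate(parts)

sub_preds = [[0.0, 0.5, 1.0], [0.25, 0.75]]      # k = 2 sub-intervals
scales = [(100.0, 200.0), (150.0, 250.0)]        # (min, max) saved at normalization
print(integrate_predictions(sub_preds, scales))  # [100. 150. 200. 175. 225.]
```

Because each sub-interval was normalized with its own scale, each must be inverse-transformed with that same scale before the pieces are joined into the final prediction over the whole interval.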
As can be seen from this specific example of deep learning load prediction, the fine granularity of the segmentation interval can be changed according to the length of the prediction interval; for example, when the prediction interval is relatively long, the total prediction interval can be divided at a finer granularity into a number of detailed sub-prediction intervals. The method is therefore suitable for load prediction over different durations, can unify prediction algorithms for different prediction lengths, reduces the complexity of constructing a load prediction model, and enhances the universality of the load prediction model. Because a clustering algorithm is used to determine an appropriate first preset moving time window, and the first sub-load training data are preprocessed and normalized before being used to train the deep learning prediction models of the first sub-prediction intervals, the prediction accuracy and the data processing speed are further improved.
As a preferred embodiment, the deep learning load prediction method is verified by taking the prediction of the load for 1 day in the future as an example, where the resolution of the historical load data is 15 minutes. Assume the prediction interval length is T, the actual load data are x_t, t = 1, 2, …, T, and the load data predicted by the deep learning model are recorded as x̂_t, t = 1, 2, …, T. Data for one day are predicted, so T = 96. The accuracy of the prediction is measured by using the following 3 evaluation indexes:
(1) root mean square error

error_1 = sqrt( (1/T) Σ_{t=1}^{T} (x_t − x̂_t)^2 )

(2) mean absolute error

error_2 = (1/T) Σ_{t=1}^{T} |x_t − x̂_t|

(3) prediction accuracy

accuracy = (1/T) Σ_{t=1}^{T} ( |x_t − x̂_t| / x_t ) × 100%
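The three named evaluation indexes, in their standard forms (the original formula images are not reproduced here, so the canonical definitions of RMSE, MAE, and mean absolute percentage error are assumed), can be computed as:

```python
import numpy as np

def rmse(actual, pred):
    """Root mean square error."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def mae(actual, pred):
    """Mean absolute error."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs(a - p)))

def mape_percent(actual, pred):
    """Mean absolute percentage error in percent (lower is better)."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs(a - p) / a) * 100.0)

actual = [100.0, 200.0, 400.0]   # illustrative load values
pred = [110.0, 190.0, 380.0]
print(round(mae(actual, pred), 2))           # 13.33
print(round(mape_percent(actual, pred), 2))  # 6.67
```

Under these definitions the "prediction accuracy" index is an error percentage, which is why a drop from 5.15% to 2.19% in the example below is reported as an improvement.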
According to the above steps, the optimal time window length is first found by the clustering algorithm; the optimal number of clusters in this example is 4, and the sub-prediction intervals are divided as shown in fig. 3. Then, the hyper-parameters of the deep neural network are trained by grid search using the sub-load training data corresponding to each sub-prediction interval. Finally, load data are predicted by the trained LSTM model based on the historical load data; the algorithm is compared with a traditional BP neural network, and the resulting prediction graphs are shown in fig. 4.
The errors of the algorithm designed by the invention are: error_1 = 9271.51, error_2 = 75.34; the prediction errors of the traditional BP neural network are: error_1 = 26192.09, error_2 = 128.77. Compared with the traditional neural network, the errors of the algorithm are reduced by Δerror_1 = 16920.58 and Δerror_2 = 53.43, respectively. In addition, the prediction accuracy index of the LSTM algorithm is 2.19%, while that of the BP neural network is 5.15%, an improvement of 2.96%.
Applying this deep learning load prediction method to the power system can provide a solid reference for economic dispatching of the power system and a reliable basis for formulating load dispatching strategies and setting regional time-of-use electricity prices.
In addition, the deep learning load prediction method can also be applied to prediction of gas consumption in residential areas and the like, and the embodiment of the invention is not limited thereto.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 5 is a schematic structural diagram of a deep learning load prediction apparatus according to an embodiment of the present invention, and referring to fig. 5, the deep learning load prediction apparatus may include a segmentation module 50, a prediction module 51, and an output module 52.
The segmentation module 50 is configured to divide the prediction interval into a plurality of first sub-prediction intervals, and divide the historical load data into a plurality of first sub-load training data; the prediction module 51 is configured to train a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data, and obtain a load prediction value of each sub-prediction interval; the output module 52 is configured to determine a final predicted value according to the load predicted values of the sub-prediction sections.
For the introduction of the deep learning load prediction apparatus in the embodiment of the present invention, please refer to the foregoing deep learning load prediction method embodiment, and the detailed description of the embodiment of the present invention is omitted here.
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various deep learning load prediction method embodiments described above, such as the steps shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 50 to 52 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the load prediction device 6. For example, the computer program 62 may be divided into a segmentation module, a prediction module, and an output module, each of which functions specifically as follows:
the segmentation module is used for dividing the prediction interval into a plurality of first sub-prediction intervals and dividing the historical load data into a plurality of first sub-load training data; the prediction module is used for training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and obtaining a load prediction value of each sub-prediction interval; and the output module is used for determining a final predicted value according to the load predicted value of each sub-prediction interval.
The load prediction device 6 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The load prediction device may include, but is not limited to, a processor 60 and a memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation of the terminal device 6, which may include more or fewer components than those shown, combine some components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the load prediction device 6. The memory 61 may also be an external storage device of the load prediction apparatus 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the load prediction apparatus 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the load prediction apparatus 6. The memory 61 is used for storing the computer programs and other programs and data required by the load prediction device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A deep learning load prediction method is characterized by comprising the following steps:
dividing a prediction interval into a plurality of first sub-prediction intervals by using a first preset moving time window, and dividing historical load data into a plurality of first sub-load training data, wherein the length of the historical load data corresponds to the length of the prediction interval;
preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data;
normalizing the preprocessed multiple first sub-load training data;
training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data, and obtaining a load prediction value of each sub-prediction interval;
determining a final predicted value according to the predicted value of each sub-prediction interval;
the first preset moving time window is obtained by a clustering algorithm, and the process is as follows:
randomly acquiring a group of second preset moving time windows, and dividing the historical load data into a plurality of groups of second sub-load training data by using the group of second preset time windows, wherein the plurality of groups of second sub-load training data are used as the input of a clustering algorithm;
finding the minimum clustering number according to the measuring standard of the clustering algorithm;
selecting the minimum value of the cluster number, and taking the moving time window with the maximum length corresponding to the minimum value as the first preset moving time window;
the training of the deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and the obtaining of the load prediction value of each sub-prediction interval specifically include:
preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data;
normalizing the preprocessed multiple first sub-load training data;
training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data after normalization processing, and obtaining a load prediction value of each sub-prediction interval;
the deep learning prediction model is a deep LSTM prediction model constructed by utilizing a Keras deep learning framework, and the training process of the deep learning prediction model is as follows:
acquiring first sub-load training data of an ith sub-prediction interval;
searching the LSTM hyper-parameter by using a grid;
and establishing a depth LSTM prediction model by using the first sub-load training data and the LSTM hyper-parameter.
2. The deep learning load prediction method of claim 1, wherein mean profile coefficients and interval statistics are used as metrics for the clustering algorithm.
3. The deep learning load prediction method of claim 1, wherein the second predetermined set of moving time windows are set by:
the initial length of each second preset moving time window is the same, the second preset moving time window is utilized to divide the historical load data into a group of second sub-load training data, and after a preset step length delta T is added to the last length of the second preset moving time window, the historical load data are divided according to the current length of the added second preset moving time window until a plurality of groups of second sub-load training data are obtained; or
And setting the length of the second preset moving time window according to the change rate of the historical load data, and dividing the historical load data according to the increased current length of the second preset moving time window after increasing the previous length of the second preset moving time window by a preset step length delta T until a plurality of groups of second sub-load training data are obtained.
4. The deep learning load prediction method according to any one of claims 1 to 3, wherein the first predetermined moving time window is moved in a manner that:
when the current first preset moving time window executes the next movement, if the preset moving step length Δt satisfies Δt = Twin,i, the starting position of the next first preset moving time window adjoins the ending position of the current first preset moving time window, where Twin,i is the length of the current first preset moving time window; or
When the current first preset moving time window executes the next movement, if the preset moving step length Δt satisfies Δt < Twin,i, the next first preset moving time window overlaps the current first preset moving time window, where Twin,i is the length of the current first preset moving time window;
the moving mode of the second preset moving time window is as follows:
when the current second preset moving time window executes the next movement, if the preset moving step length Δt satisfies Δt = Twin,i, the starting position of the next second preset moving time window adjoins the ending position of the current second preset moving time window, where Twin,i is the length of the current second preset moving time window; or
When the current second preset moving time window executes the next movement, if the preset moving step length Δt satisfies Δt < Twin,i, the next second preset moving time window overlaps the current second preset moving time window, where Twin,i is the length of the current second preset moving time window.
5. The deep learning load prediction method of any one of claims 1 to 3, wherein the algorithm of the deep learning prediction model is a deep self-coding neural network.
6. A deep learning load prediction apparatus, comprising:
the system comprises a segmentation module, a prediction module and a load training module, wherein the segmentation module is used for dividing a prediction interval into a plurality of first sub-prediction intervals by utilizing a first preset moving time window and dividing historical load data into a plurality of first sub-load training data; preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data; normalizing the preprocessed multiple first sub-load training data;
the prediction module is used for training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data and obtaining a load prediction value of each sub-prediction interval;
the determining module is used for determining a final predicted value according to the load predicted values of the sub-prediction intervals;
the slitting module is specifically configured to:
randomly acquiring a group of second preset moving time windows, and dividing the historical load data into a plurality of groups of second sub-load training data by using the group of second preset time windows, wherein the plurality of groups of second sub-load training data are used as the input of a clustering algorithm;
finding the minimum clustering number according to the measuring standard of the clustering algorithm;
selecting the minimum value of the cluster number, and taking the moving time window with the maximum length corresponding to the minimum value as the first preset moving time window;
the prediction module is specifically configured to:
preprocessing the plurality of first sub-load training data, and correcting abnormal data in the plurality of first sub-load training data or filling missing data in the plurality of first sub-load training data;
normalizing the preprocessed multiple first sub-load training data;
training a deep learning prediction model of the plurality of first sub-prediction intervals by using the plurality of first sub-load training data after normalization processing, and obtaining a load prediction value of each sub-prediction interval;
the deep learning prediction model is a deep LSTM prediction model built with the Keras deep learning framework, and the prediction module is further specifically configured to:
acquire the first sub-load training data of an i-th sub-prediction interval;
determine the LSTM hyper-parameters by grid search;
and establish the deep LSTM prediction model by using the first sub-load training data and the LSTM hyper-parameters.
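The grid search over LSTM hyper-parameters named above can be sketched as an exhaustive loop over candidate combinations. In the patented method the score for each combination would come from training a Keras deep LSTM on the i-th sub-interval's training data; here `validation_error` is an explicitly labelled stand-in so the search loop itself is runnable, and the grid values are illustrative only.

```python
import itertools

# Candidate hyper-parameter grid (illustrative values, not from the patent).
GRID = {
    "units": [32, 64],   # LSTM hidden units per layer
    "layers": [1, 2],    # number of stacked LSTM layers
    "lr": [1e-2, 1e-3],  # optimizer learning rate
}

def validation_error(units, layers, lr):
    """Placeholder for train-then-validate on the sub-interval data; a real
    run would fit a Keras LSTM and return its validation loss."""
    return abs(units - 64) / 64 + abs(layers - 2) + abs(lr - 1e-3)

def grid_search(grid):
    """Score every hyper-parameter combination; return the best one."""
    keys = list(grid)
    best, best_err = None, float("inf")
    for combo in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, combo))
        err = validation_error(**params)
        if err < best_err:
            best, best_err = params, err
    return best

print(grid_search(GRID))  # → {'units': 64, 'layers': 2, 'lr': 0.001}
```

The chosen combination would then be passed to the model-building step, e.g. stacking `layers` Keras LSTM layers of `units` units each over the normalized sub-load training data.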
7. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the deep learning load prediction method according to any one of claims 1 to 5.
8. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, carries out the steps of the deep learning load prediction method according to any one of claims 1 to 5.
CN201910527965.9A 2019-06-18 2019-06-18 Deep learning load prediction method and device and terminal equipment Active CN110232483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910527965.9A CN110232483B (en) 2019-06-18 2019-06-18 Deep learning load prediction method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910527965.9A CN110232483B (en) 2019-06-18 2019-06-18 Deep learning load prediction method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110232483A CN110232483A (en) 2019-09-13
CN110232483B true CN110232483B (en) 2021-05-04

Family

ID=67859762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910527965.9A Active CN110232483B (en) 2019-06-18 2019-06-18 Deep learning load prediction method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN110232483B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126565A (en) * 2019-11-28 2020-05-08 广东电网有限责任公司 Method and device for predicting block load density index based on deep learning
CN112862143A (en) * 2019-11-28 2021-05-28 新奥数能科技有限公司 Load and price prediction method
CN110701732B (en) * 2019-12-10 2020-06-16 南昌掘策数据服务有限公司 Energy consumption data analysis method and system and energy saving method and system of central air conditioner
CN111419249B (en) * 2020-03-26 2023-04-25 心图熵动科技(苏州)有限责任公司 Depression prediction model generation method and prediction system
CN111694830A (en) * 2020-06-12 2020-09-22 复旦大学 Missing data completion method based on deep ensemble learning
CN112330010A (en) * 2020-11-03 2021-02-05 长安大学 Power consumer load interval prediction method based on deep learning
CN112734106A (en) * 2021-01-08 2021-04-30 深圳市国电科技通信有限公司 Method and device for predicting energy load
CN115174237B (en) * 2022-07-08 2023-04-18 河北科技大学 Method and device for detecting malicious traffic of Internet of things system and electronic equipment
CN116995673B (en) * 2023-09-26 2024-02-20 宁德时代新能源科技股份有限公司 Power load prediction method, power load prediction model training method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103606013A (en) * 2013-12-06 2014-02-26 国家电网公司 User annual power consumption prediction method based on support vector machine
CN103901305A (en) * 2014-03-31 2014-07-02 广东电网公司电力科学研究院 Online early warning method for power equipment
CN108009673A (en) * 2017-11-24 2018-05-08 国网北京市电力公司 Novel load Forecasting Methodology and device based on deep learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10354345B2 (en) * 2012-01-23 2019-07-16 Whisker Labs, Inc. Optimizing and controlling the energy consumption of a building


Non-Patent Citations (2)

Title
Wang Ying, "Analysis of Power Load Forecasting in Typical Developed and Developing Countries" [in Chinese], China Master's Theses Full-text Database, Economics and Management Sciences, No. 1, 2014-01-15, pp. 10-150 *
Sun Da-shuai, Ma Li-xin, Wang Shou-zheng, "The Design of Short-term Load Forecast Systems Based on the Theory of Complex", 2010 International Conference on Intelligent System Design and Engineering Application, 2011-04-11, pp. 1-4 *


Similar Documents

Publication Publication Date Title
CN110232483B (en) Deep learning load prediction method and device and terminal equipment
CN109727446A (en) A kind of identification and processing method of electricity consumption data exceptional value
CN110059845B (en) Metering device clock error trend prediction method based on time sequence evolution gene model
Zhang et al. Wind speed prediction research considering wind speed ramp and residual distribution
CN114912720A (en) Memory network-based power load prediction method, device, terminal and storage medium
CN116187835A (en) Data-driven-based method and system for estimating theoretical line loss interval of transformer area
CN112907064B (en) Electric quantity prediction method and device based on adaptive window, storage medium and terminal
Li et al. Do simpler statistical methods perform better in multivariate long sequence time-series forecasting?
CN112819260A (en) Data processing system for predicting flight delay state
CN112598181A (en) Load prediction method, device, equipment and storage medium
CN115965160B (en) Data center energy consumption prediction method and device, storage medium and electronic equipment
CN115935212A (en) Adjustable load clustering method and system based on longitudinal trend prediction
Xin et al. Short-term load forecasting for electric vehicle charging stations based on time series distance measuring
CN115796338A (en) Photovoltaic power generation power prediction model construction and photovoltaic power generation power prediction method
CN113933915B (en) Short-term and temporary extrapolation forecasting method based on space-time disturbance information interaction integration nesting
Majidpour Time series prediction for electric vehicle charging load and solar power generation in the context of smart grid
Hou et al. Uncertainty reduction in power generation forecast using coupled wavelet-ARIMA
Phan et al. A study on missing data imputation methods for improving hourly solar dataset
CN111368257B (en) Analysis and prediction method and device for coal-to-electricity load characteristics
Liang et al. PM2. 5 concentration forecasting based on data preprocessing strategy and LSTM neural network
CN103678953A (en) Biological fermentation yield on-line forecasting method based on Bayes combination neural network
CN114282657A (en) Market data long-term prediction model training method, device, equipment and storage medium
CN107704723A (en) A kind of notable Variable Selection based on Slope correlation
Liu et al. Multivariate long-time series traffic passenger flow prediction using causal convolutional sparse self-attention MTS-Informer
CN112801415A (en) Ultra-short-term load prediction method and system based on Markov chain distribution model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant