CN112883649B - Power load prediction method, system, computer equipment and storage medium - Google Patents
- Publication number
- CN112883649B (application CN202110219612.XA)
- Authority
- CN
- China
- Prior art keywords
- component
- prediction result
- data
- residual component
- sample set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2111/00—Details relating to CAD techniques
- G06F2111/08—Probabilistic or stochastic CAD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
- G06F2218/06—Denoising by applying a scale-space analysis, e.g. using wavelet analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a power load prediction method, system, computer equipment and storage medium, wherein the method comprises the following steps: decomposing and reconstructing historical electricity consumption data to obtain an approximate component and a residual component, and performing feature processing on each to obtain an approximate component sample set and a residual component sample set; inputting the approximate component sample set into a neural network to obtain a prediction result of the approximate component; inputting the residual component sample set into a neural network to obtain a prediction result of the residual component; and summing the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result. By obtaining the approximate component prediction result and the residual component prediction result and summing them to form the power load prediction result, the invention improves the accuracy of power load prediction and further improves the prediction precision.
Description
Technical Field
The present invention relates to the field of power load prediction technologies, and in particular, to a power load prediction method, a power load prediction system, a computer device, and a storage medium.
Background
As competition in the power market intensifies and user demand keeps rising, the safe and economical operation of the power grid becomes vital. The power supplied by generation departments must track actual load consumption to remain stable and efficient; otherwise the safety and stability of the whole power system are endangered. Short-term prediction of the power load is therefore necessary: it effectively guarantees safe operation of the power grid, reduces generation cost, meets user demand, and improves social and economic benefits. However, electricity consumption exhibits obvious periodic characteristics along with many random influencing factors, which introduce a large amount of noise and affect both the speed and the accuracy of power load prediction.
Disclosure of Invention
The embodiment of the invention provides a power load prediction method, a power load prediction system, computer equipment and a storage medium, which aim to solve the problems of low prediction speed and low prediction precision of power load prediction caused by external random factors in the prior art.
In a first aspect, an embodiment of the present invention provides a power load prediction method, including:
acquiring historical electricity consumption data, decomposing and restoring the historical electricity consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component;
performing feature processing on the approximate component and the residual component respectively to obtain an approximate component sample set and a residual component sample set;
inputting sample data in the approximate component sample set to a full-connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set according to the spectrogram; extracting features of sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component;
and summing the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
In a second aspect, an embodiment of the present invention provides a power load prediction system, including:
the data decomposition and restoration module is used for obtaining historical electricity consumption data, decomposing and restoring the historical electricity consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component;
the characteristic processing module is used for respectively carrying out characteristic processing on the approximate component and the residual component to obtain an approximate component sample set and a residual component sample set;
the approximate component prediction result acquisition module is used for inputting sample data in the approximate component sample set to a full-connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
the residual component prediction result acquisition module is used for converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix and acquiring a residual component image sample set according to the spectrogram; extracting features of sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component;
and the power load prediction result acquisition module is used for summing the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
In a third aspect, an embodiment of the present invention further provides a computer apparatus, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the power load prediction method described in the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores a computer program, which when executed by a processor, causes the processor to perform the power load prediction method described in the first aspect above.
The embodiment of the invention provides a power load prediction method, a power load prediction system, computer equipment and a storage medium. The method comprises the steps of obtaining historical electricity consumption data, decomposing and restoring the historical electricity consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component; performing feature processing on the approximate component and the residual component respectively to obtain an approximate component sample set and a residual component sample set; inputting sample data in the approximate component sample set to a full-connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component; converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set according to the spectrogram; extracting features of sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component; and summing the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
According to the embodiment of the invention, the approximate component prediction result and the residual component prediction result are obtained and summed into the power load prediction result, so that the influence of noise does not need to be considered separately; this improves the prediction speed of power load prediction, improves its accuracy, and further improves the prediction precision.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a power load prediction method according to an embodiment of the present invention;
FIG. 2 is a comparison chart of a regional prediction result of a power load prediction method according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a power load prediction system provided by an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a flowchart of a power load prediction method according to an embodiment of the invention, and the method includes steps S101 to S105.
S101, acquiring historical electricity consumption data, decomposing and restoring the historical electricity consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component;
in this step, historical electricity consumption data over a period of time are obtained, the historical electricity consumption data are decomposed by wavelet transformation, and the decomposed data are then reconstructed by inverse wavelet transformation to obtain a low-frequency subsequence (i.e., the approximate component) and a high-frequency subsequence (i.e., the residual component).
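As a minimal sketch of this decomposition-and-reconstruction step (assuming a one-level Haar wavelet; the patent does not fix the wavelet basis or decomposition depth), the low-frequency and high-frequency subsequences can be obtained and restored to the original time axis as follows:

```python
import numpy as np

def haar_decompose(signal):
    """One-level Haar wavelet decomposition: returns (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-frequency subsequence
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-frequency subsequence
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert the one-level Haar decomposition back to the original length."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Reconstructing from only one set of coefficients (zeroing the other) yields
# the approximate component and the residual component on the original axis.
load = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 13.0, 16.0, 18.0])
cA, cD = haar_decompose(load)
approx_component = haar_reconstruct(cA, np.zeros_like(cD))
residual_component = haar_reconstruct(np.zeros_like(cA), cD)
```

By construction the two components sum back to the original series, which is what allows the two prediction results to be summed in the final step.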
S102, respectively carrying out feature processing on the approximate component and the residual component to obtain an approximate component sample set and a residual component sample set;
in the step, the approximate components are subjected to feature processing to obtain an approximate component sample set, and then the residual components are subjected to feature processing to obtain a residual component sample set.
In one embodiment, the step S102 includes:
acquiring historical electricity consumption data before a preset time period, and adding corresponding exogenous variables by combining the prediction tasks of the approximate components to obtain an approximate component sample set;
and acquiring historical electricity consumption data before a preset time period, and combining the prediction tasks of the residual components to obtain a residual component sample set.
In this embodiment, according to the historical electricity consumption data before the predetermined time period, in combination with the task of predicting the approximate component, an exogenous variable corresponding to the time is added to the predicted task, so as to obtain an approximate component sample set; the residual component sample set also needs to be obtained according to historical electricity consumption data before a preset time period in combination with a prediction task of the residual component, and is different from the approximate component sample set in that an exogenous variable does not need to be added to the residual component sample set.
The approximate component is subjected to feature processing as follows. Short-term power prediction generally predicts the electricity consumption at k moments, t moments in advance, and the prediction target is the approximate component data at those k moments. Firstly, historical electricity consumption data before the t+k moments are obtained; in combination with the prediction task of the approximate component, the most readily available and most strongly correlated influencing factors are collated, and exogenous variables for the corresponding times are added for these influencing factors, finally obtaining the approximate component sample set. The exogenous variables include: air temperature, time period, rain and snow grade, sunny day, heating area, working day, and holidays such as New Year's Day, Spring Festival, Qingming Festival, May Day, Dragon Boat Festival, Mid-Autumn Festival and National Day. The residual component is subjected to feature processing in the same way: t moments in advance, the electricity consumption at k moments is predicted; historical electricity consumption data before the t+k moments are obtained, and the residual component sample set is obtained in combination with the residual component prediction task.
As shown in fig. 2, suppose k = 5 h and t = 5 h are preset. When the electricity consumption for hours 30-35 needs to be predicted, the historical electricity consumption data for hours 0-20 are obtained 10 h in advance, and the corresponding exogenous variables are then added in combination with the prediction task of the approximate component, so as to obtain the approximate component sample set for hours 30-35.
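The windowed sample-set construction described above can be sketched as follows (the function name, history-window length and index conventions are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def build_samples(series, history_len, t, k, exog=None):
    """Slide over the load series: each sample pairs a history window (plus
    optional exogenous features) with the k load values starting t moments
    after the end of that window."""
    X, y = [], []
    for i in range(len(series) - history_len - t - k + 1):
        hist = series[i : i + history_len]
        target = series[i + history_len + t : i + history_len + t + k]
        feats = hist if exog is None else np.concatenate([hist, exog[i]])
        X.append(feats)
        y.append(target)
    return np.array(X), np.array(y)

# e.g. predict k = 5 moments of load, t = 5 moments ahead, from 10 past moments
X, y = build_samples(np.arange(30.0), history_len=10, t=5, k=5)
```

For the approximate component, `exog` would carry the per-sample exogenous variables (temperature, holiday flags, and so on); for the residual component it is left as `None`.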
S103, inputting sample data in the approximate component sample set to a full connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
in this step, the sample data in the approximate component sample set are input into a recurrent neural network (in this embodiment, a GRU neural network) for learning. Before being input into the recurrent network, the sample data are first vector-adjusted by a full-connection layer: the samples are fed into the full-connection layer in the form of feature vectors under different step sizes, which enhances the self-adjusting capability of the network and adjusts the input vectors into features that the network can easily identify and utilize. The adjusted sample data are then input into the pre-constructed GRU neural network for learning to obtain approximate component characteristic data, and finally the approximate component characteristic data are normalized to obtain the final approximate component prediction result. The Softmax layer is a normalized exponential function, and the full-connection layer, GRU neural network and Softmax layer used in this step can all be constructed from toolkits in Python (a cross-platform computer programming language).
In an embodiment, the inputting the adjusted sample data into the GRU neural network for learning to obtain the approximate component feature data includes:
inputting the adjusted sample data into a pre-constructed GRU neural network;
updating the adjusted sample data by using an updating gate in the GRU neural network to obtain updated data, and resetting the adjusted sample data by using a resetting gate in the GRU neural network to obtain reset data;
and learning the adjusted sample data by using the updated data and the reset data to obtain approximate component characteristic data.
In this embodiment, the pre-constructed GRU neural network is used to process the sample data after vector adjustment by the full-connection layer: the update gate updates the sample data, the reset gate resets them, and finally the update data and the reset data are used to learn the vector-adjusted sample data to obtain the approximate component characteristic data. The GRU is a kind of RNN which, like the LSTM, was proposed to solve the vanishing-gradient problem that arises in long-term memory and backpropagation. Compared with the LSTM, the GRU has one fewer gate internally, so it has fewer parameters; it can achieve an equivalent effect while being easier to train, which can improve training efficiency to a great extent.
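The update-gate/reset-gate computation of a single GRU step can be sketched in plain NumPy (biases are omitted for brevity, and the weight shapes are illustrative; a real implementation would come from a deep-learning toolkit):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: combine input x with previous hidden state h."""
    z = sigmoid(Wz @ x + Uz @ h)             # update gate: how much to renew
    r = sigmoid(Wr @ x + Ur @ h)             # reset gate: how much history to keep
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state from reset history
    return (1 - z) * h + z * h_cand          # new hidden state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# alternate input-to-hidden and hidden-to-hidden weight matrices
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0 else
          rng.standard_normal((d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):     # run 5 time steps
    h = gru_cell(x, h, *params)
```

Because the new state is a convex combination of the previous state and a tanh candidate, the hidden state stays bounded, which is part of what eases training relative to plain RNNs.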
In an embodiment, the inputting the feature data to a Softmax layer for normalization processing, and obtaining the prediction result of the approximate component includes:
inputting the characteristic data into a Softmax layer for normalization processing to obtain prediction scores ranging over (-infinity, +infinity);
converting the prediction scores into probabilities, and taking the prediction result with the highest probability as the prediction result of the approximate component.
In this embodiment, the characteristic data input to the Softmax layer are normalized, and a vector of values is output, each of which represents the probability of one prediction result; the prediction result with the largest probability value is selected as the prediction result of the approximate component. The main function of the Softmax layer is normalization. For example, suppose the data input to the Softmax layer are pictures to be classified into one hundred classes: after Softmax processing, a one-hundred-dimensional vector is output, whose first value is the probability that the current picture belongs to the first class, whose second value is the probability that it belongs to the second class, and so on for all one hundred classes, and the values of the vector sum to 1.
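The normalization performed by the Softmax layer is the normalized exponential function; a minimal NumPy version (the scores are illustrative):

```python
import numpy as np

def softmax(scores):
    """Map raw scores in (-inf, +inf) to probabilities that sum to 1."""
    e = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
best = int(np.argmax(probs))             # index of the most probable prediction
```

Subtracting the maximum score before exponentiating leaves the result unchanged mathematically but prevents overflow for large scores.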
S104, converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set according to the spectrogram; extracting features of sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component;
in the step, the residual component sample set is converted into a time-frequency matrix by utilizing short-time Fourier transform, a spectrogram of the time-frequency matrix is obtained, a residual component image sample set is formed, then residual component characteristic data of the residual component image sample set are extracted by utilizing a pre-constructed DPN network, and finally the characteristic data are learned by utilizing a pre-constructed convolutional neural network, so that a prediction result of the residual component is obtained.
In an embodiment, the converting the residual component sample set into a time-frequency matrix and obtaining a spectrogram of the time-frequency matrix includes:
converting the residual component sample set into a time-frequency matrix through discrete short-time Fourier transform, and acquiring a spectrogram of the time-frequency matrix;
the discrete short-time fourier transform is performed by the following formula:
where z () is the source signal, g () is the window function, m is the window length, T is the sampling frequency,is a representation of a complex number, which is, k is E (- ≡infinity), + -infinity), ++>
In this embodiment, the short-time fourier transform is a time-frequency analysis method which represents the signal characteristics at a certain moment by the signal segment within a time window. In the short-time Fourier transform process, the window length determines the time resolution and the frequency resolution of the spectrogram: the longer the window, the longer the intercepted signal, so the higher the frequency resolution after the transform but the worse the time resolution; conversely, the shorter the window, the shorter the intercepted signal, the worse the frequency resolution and the better the time resolution. In short, the short-time Fourier transform multiplies the source signal by a window function and then performs a one-dimensional Fourier transform; by sliding the window function, a series of Fourier transform results are obtained and arranged into a two-dimensional matrix.
When processing the residual component sample set, the continuous-time Fourier transform of the continuous signal must be replaced by a discrete Fourier transform of the signal within each window, yielding a short-time Fourier transform that is discrete in both time and frequency and can therefore be implemented by a digital computer.
In an embodiment, the converting the residual component sample set into a time-frequency matrix by discrete short-time fourier transform, and obtaining a spectrogram of the time-frequency matrix includes:
acquiring specified parameters in the residual component sample set; the specified parameters include: source signal, window function, window length, overlap point number, sampling frequency and fourier point number;
calculating the signal length of the source signal, and calculating the sliding times of a window function according to the signal length, the window length and the number of overlapping points;
the source signal corresponding to each window function sliding is expressed as a column, the value of each column is determined, and a sliding matrix of the designated row number and column number is obtained;
converting a window function into a column vector, and expanding the column vector into a vector matrix with a designated column number;
and performing point multiplication operation on the sliding matrix and the vector matrix, performing fast Fourier transform on the point multiplication result to obtain a time-frequency matrix, and acquiring a spectrogram of the time-frequency matrix.
In this embodiment, the specified parameters are determined first, mainly comprising the source signal, window function, window length, number of overlapping points, sampling frequency and number of Fourier points. The number of Fourier points is used during the Fourier transform: when the signal length is smaller than the number of Fourier points, the system automatically zero-pads the signal before transforming it. After the specified parameters are determined, the signal length of the source signal is calculated, and the number of window-function slides is derived from the signal length together with the window length and the number of overlapping points in the specified parameters. The source signal segment corresponding to each slide of the window function is then expressed as a column, giving a sliding matrix with the specified numbers of rows and columns; the window function is converted into a column vector and expanded into a vector matrix with the same number of columns as the sliding matrix. Finally, a point multiplication of the sliding matrix and the vector matrix is performed, a fast Fourier transform of the product yields the time-frequency matrix, and the spectrogram is output from that matrix.
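The matrix-based procedure above (sliding matrix, tiled window vector, point multiplication, column-wise FFT) can be sketched as follows. This is an illustrative numpy implementation, not the patent's code; the function and parameter names are assumptions.

```python
import numpy as np

def stft_spectrogram(z, window, hop, n_fft):
    """Sketch of the matrix-based discrete STFT described above.

    z       -- 1-D source signal
    window  -- window function samples (length m)
    hop     -- step between consecutive windows (window length minus overlap)
    n_fft   -- number of Fourier points (segments are zero-padded up to it)
    """
    m = len(window)
    n_slides = 1 + (len(z) - m) // hop           # number of window-function slides

    # Sliding matrix: one column per window position.
    slide = np.stack([z[i * hop: i * hop + m] for i in range(n_slides)], axis=1)

    # Window as a column vector, expanded to the same number of columns.
    win = np.tile(window[:, None], (1, n_slides))

    # Point multiplication, then an FFT down each column
    # (zero-padded to n_fft when m < n_fft, as the embodiment describes).
    tf_matrix = np.fft.fft(slide * win, n=n_fft, axis=0)

    return np.abs(tf_matrix)                     # magnitude spectrogram

# Example: an 8 Hz tone sampled at 64 Hz for one second.
fs = 64
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 8 * t)
spec = stft_spectrogram(signal, np.hanning(16), hop=8, n_fft=32)
```

With these parameters the tone concentrates in the frequency bin corresponding to 8 Hz (bin 4 of the 32-point grid), illustrating how the time-frequency matrix localizes signal energy.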
In an embodiment, the feature extracting the sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component, including:
extracting features of sample data in the residual component image sample set by using a DPN (Dual Path Network), respectively inputting the extracted features into a maximum pooling layer and an average pooling layer for pooling processing, summarizing the max-pooled features and the average-pooled features, inputting the summarized features into a flattening layer, and outputting feature vectors;
and carrying out normalization processing and overfitting prevention processing on the feature vectors, and inputting the processed feature vectors into a linear full-connection layer for linear regression operation to obtain a prediction result of the residual components.
In this embodiment, the DPN network is used to extract the residual component feature data; the maximum pooling layer and the average pooling layer in the Python toolkit are called to pool the feature data, all pooled features are summarized, and the summarized features are input to the flatten layer (i.e. the flattening layer) in the Python toolkit to be reduced to one dimension, yielding feature vectors. The feature vectors are then input to a [BatchNorm1d, Dropout, Linear, ReLU] module for normalization processing, then to a [BatchNorm1d, Dropout, Linear] module for overfitting-prevention processing, and finally a linear regression operation is performed. DPN, also called Dual Path Network, allows the model to utilize features more fully, reduces unnecessary memory copies, and significantly increases training speed.
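The prediction head described above can be sketched in plain numpy. This is a hedged illustration only: the embodiment uses PyTorch-style BatchNorm1d/Dropout/Linear layers and a trained DPN backbone, while here a random feature map stands in for the backbone output, BatchNorm is approximated by an inference-time standardization, Dropout is omitted, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_residual(feature_map, w, b, pool=2):
    """Sketch of the head: max pool + average pool, summarize, flatten,
    normalize, then a linear regression output."""
    c, h, wd = feature_map.shape
    blocks = feature_map.reshape(c, h // pool, pool, wd // pool, pool)

    max_pooled = blocks.max(axis=(2, 4))         # maximum pooling branch
    avg_pooled = blocks.mean(axis=(2, 4))        # average pooling branch

    # Summarize both branches, then flatten to a one-dimensional feature vector.
    feat = np.concatenate([max_pooled, avg_pooled]).ravel()

    # Normalization stand-in (BatchNorm at inference reduces to an affine map).
    feat = (feat - feat.mean()) / (feat.std() + 1e-5)

    return float(feat @ w + b)                   # linear regression output

fmap = rng.standard_normal((4, 8, 8))            # stand-in DPN feature map
w = rng.standard_normal(2 * 4 * 4 * 4)           # 2 branches x 4 ch x 4 x 4
pred = predict_residual(fmap, w, b=0.1)
```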
And S105, computing the sum of the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
In this step, the prediction result of the approximate component and the prediction result of the residual component are added together, and the addition result is the power load prediction result.
Referring to fig. 3, fig. 3 is a schematic block diagram of a power load prediction system according to an embodiment of the present invention, where the power load prediction system 200 includes:
the data decomposition and restoration module 201 is configured to obtain historical power consumption data, decompose and restore the historical power consumption data by using wavelet transformation, and obtain an approximate component and a residual component;
the feature processing module 202 is configured to perform feature processing on the approximate component and the residual component, so as to obtain an approximate component sample set and a residual component sample set;
the approximate component prediction result obtaining module 203 is configured to input sample data in the approximate component sample set to a full connection layer for vector adjustment, input the adjusted sample data to a GRU neural network for learning to obtain approximate component feature data, and input the feature data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
the residual component prediction result obtaining module 204 is configured to convert the residual component sample set into a time-frequency matrix, obtain a spectrogram of the time-frequency matrix, and obtain a residual component image sample set according to the spectrogram; extracting features of sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component;
and the power load prediction result obtaining module 205 is configured to calculate a sum of the prediction result of the approximate component and the prediction result of the residual component, so as to obtain the power load prediction result.
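The decomposition-and-restoration step handled by module 201 can be sketched as follows. The patent does not fix the wavelet basis, so a single-level Haar transform is assumed here, and all names are illustrative.

```python
import numpy as np

def haar_decompose_restore(x):
    """Split a load series into an approximate component (low-frequency trend)
    and a residual component (detail), each restored to the original length.
    Single-level Haar wavelet assumed; even-length input expected."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / 2.0      # approximation coefficients
    d = (x[0::2] - x[1::2]) / 2.0      # detail coefficients

    # Restore each branch to the time domain at full length.
    approx = np.repeat(a, 2)                                  # approximate component
    resid = np.repeat(d, 2) * np.tile([1.0, -1.0], len(d))    # residual component
    return approx, resid

load = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 13.0])   # toy electricity data
approx, resid = haar_decompose_restore(load)
# By construction the two components sum back to the original series,
# which is what lets module 205 add the two component predictions later.
```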
In one embodiment, the feature processing module 202 includes:
the approximate component sample set acquisition module is used for acquiring historical electricity consumption data before a preset time period, and adding corresponding exogenous variables in combination with a prediction task of the approximate component to obtain an approximate component sample set;
and the residual component sample set acquisition module is used for acquiring historical electricity consumption data before a preset time period and combining the prediction task of the residual component to obtain a residual component sample set.
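The two sample-set builders above can be sketched with one helper: both use historical data before the target period, and only the approximate-component task adds exogenous variables. The window size and the exogenous feature (a day-of-week flag) are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np

def build_sample_set(history, window, exogenous=None):
    """Build (inputs, targets) from history before each predicted point.

    history   -- component values ordered in time
    window    -- number of past points forming one input sample (assumed)
    exogenous -- optional per-step extra features, added only for the
                 approximate-component task as described above
    """
    X, y = [], []
    for i in range(len(history) - window):
        features = list(history[i:i + window])
        if exogenous is not None:
            features.extend(exogenous[i + window])   # align with the target step
        X.append(features)
        y.append(history[i + window])
    return np.array(X), np.array(y)

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
exo = [[d % 7] for d in range(len(series))]          # e.g. day-of-week flag
X_approx, y_approx = build_sample_set(series, window=3, exogenous=exo)
X_resid, y_resid = build_sample_set(series, window=3)  # residual set: no exogenous
```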
In an embodiment, the approximate component predictor obtaining module 203 includes:
the sample data input module is used for inputting the adjusted sample data into a pre-constructed GRU neural network;
the neural network processing module is used for updating the adjusted sample data by using an updating gate in the GRU neural network to obtain updated data, and resetting the adjusted sample data by using a resetting gate in the GRU neural network to obtain reset data;
and the approximate component characteristic data acquisition module is used for learning the adjusted sample data by utilizing the update data and the reset data to obtain the approximate component characteristic data.
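A minimal numpy sketch of one GRU step shows how the update gate and reset gate described above interact. The weight shapes and the omission of bias terms are simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """Single GRU step (biases omitted for brevity)."""
    z = sigmoid(Wz @ x + Uz @ h)                # update gate: "updated data"
    r = sigmoid(Wr @ x + Ur @ h)                # reset gate: "reset data"
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))    # candidate state uses reset data
    return (1 - z) * h + z * h_tilde            # update gate blends old and new

rng = np.random.default_rng(1)
d_in, d_h = 3, 4
# Alternate input-to-hidden and hidden-to-hidden weight matrices.
mats = [rng.standard_normal((d_h, d_in)) if i % 2 == 0 else
        rng.standard_normal((d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):        # run 5 time steps
    h = gru_step(x, h, *mats)
```

Because the new state is a convex combination of the old state and a tanh candidate, the hidden state stays bounded, which is part of why GRUs train stably on long load series.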
In an embodiment, the approximate component predictor obtaining module 203 includes:
the normalization processing module is used for inputting the characteristic data into a Softmax layer for normalization processing to obtain prediction results distributed over (−∞, +∞);
and the approximate component prediction result screening module is used for converting the prediction results distributed over (−∞, +∞) into probabilities and taking the prediction result with the highest probability as the prediction result of the approximate component.
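The Softmax screening above can be sketched in a few lines: raw scores anywhere on (−∞, +∞) are mapped to probabilities, and the highest-probability entry is selected. The function name is illustrative.

```python
import numpy as np

def softmax_predict(logits):
    """Normalize raw scores into probabilities and pick the most probable one."""
    shifted = logits - np.max(logits)            # shift for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return probs, int(np.argmax(probs))          # probabilities, chosen index

probs, pred = softmax_predict(np.array([2.0, -1.0, 0.5]))
```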
In one embodiment, the residual component predictor retrieval module 204 includes:
the discrete short-time Fourier transform unit is used for converting the residual component sample set into a time-frequency matrix through discrete short-time Fourier transform and obtaining a spectrogram of the time-frequency matrix;
the formula calculation module is used for carrying out discrete short-time Fourier transform through the following formula:
X(m, k) = Σ_{n=−∞}^{+∞} z(n)·g(n − mT)·e^{−jkn}

where z(·) is the source signal, g(·) is the window function, m is the window length, T is the sampling frequency, e^{−jkn} is the complex exponential term, and k ∈ (−∞, +∞).
In an embodiment, the discrete short-time fourier transform unit comprises:
the specified parameter acquisition module is used for acquiring specified parameters in the residual component sample set; the specified parameters include: source signal, window function, window length, overlap point number, sampling frequency and fourier point number;
the window function sliding times calculation module is used for calculating the signal length of the source signal and calculating the sliding times of the window function according to the signal length, the window length and the number of overlapping points;
the sliding matrix acquisition module is used for representing the source signal corresponding to each window function sliding as a column, determining the value of each column and acquiring a sliding matrix with specified row number and column number;
the vector matrix acquisition module is used for converting the window function into a column vector and expanding the column vector into a vector matrix with a designated column number;
and the spectrogram acquisition module is used for performing point multiplication operation on the sliding matrix and the vector matrix, performing fast Fourier transform on a point multiplication result to obtain a time-frequency matrix, and acquiring a spectrogram of the time-frequency matrix.
In one embodiment, the residual component predictor retrieval module 204 includes:
the feature vector acquisition module is used for extracting features of sample data in the residual component image sample set by using a DPN network, inputting the extracted features into a maximum pooling layer and an average pooling layer for pooling treatment respectively, summarizing the features subjected to the maximum pooling treatment and the features subjected to the average pooling treatment, inputting the summarized features into a flattening layer, and outputting feature vectors;
and the characteristic vector processing module is used for carrying out normalization processing and overfitting prevention processing on the characteristic vector, inputting the processed characteristic vector into a linear full-connection layer for linear regression operation, and obtaining a prediction result of the residual component.
The embodiment of the invention also provides computer equipment, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the power load prediction method when executing the computer program.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the power load prediction method as described above.
In this description, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others, so that identical or similar parts of the embodiments can be understood by cross-reference. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method. It should be noted that those skilled in the art can make various modifications and adaptations of the invention without departing from its principles, and such modifications and adaptations are intended to fall within the scope of the invention as defined in the following claims.
It should also be noted that in this specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Claims (8)
1. A method of predicting an electrical load, comprising:
acquiring historical electricity consumption data, decomposing and restoring the historical electricity consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component;
performing feature processing on the approximate component and the residual component respectively to obtain an approximate component sample set and a residual component sample set;
inputting sample data in the approximate component sample set to a full-connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
the step of inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component comprises the following steps:
inputting the characteristic data into a Softmax layer for normalization processing to obtain prediction results distributed over (−∞, +∞);
converting the prediction results distributed over (−∞, +∞) into probabilities, and taking the prediction result with the maximum probability as the prediction result of the approximate component;
converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set according to the spectrogram; extracting features of sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component;
the feature extraction is performed on the sample data in the residual component image sample set, the extracted residual component feature data is input into a convolutional neural network constructed in advance for learning, and a prediction result of the residual component is obtained, and the method comprises the following steps:
extracting features of sample data in the residual component image sample set by using a DPN (Dual Path Network), respectively inputting the extracted features to a maximum pooling layer and an average pooling layer in a Python toolkit for pooling processing, summarizing the max-pooled features and the average-pooled features, inputting the summarized features to a flattening layer in the Python toolkit to reduce them to one dimension, and outputting feature vectors;
inputting the feature vectors to a [BatchNorm1d, Dropout, Linear, ReLU] module for normalization processing, then to a [BatchNorm1d, Dropout, Linear] module for overfitting-prevention processing, and inputting the processed feature vectors to a linear fully-connected layer for linear regression operation, so as to obtain a prediction result of the residual component;
and computing the sum of the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
2. The method according to claim 1, wherein the performing feature processing on the approximate component and the residual component to obtain an approximate component sample set and a residual component sample set, respectively, includes:
acquiring historical electricity consumption data before a preset time period, and adding corresponding exogenous variables by combining the prediction tasks of the approximate components to obtain an approximate component sample set;
and acquiring historical electricity consumption data before a preset time period, and combining the prediction tasks of the residual components to obtain a residual component sample set.
3. The method of claim 1, wherein the inputting the adjusted sample data into the GRU neural network for learning to obtain the approximate component feature data comprises:
inputting the adjusted sample data into a pre-constructed GRU neural network;
updating the adjusted sample data by using an updating gate in the GRU neural network to obtain updated data, and resetting the adjusted sample data by using a resetting gate in the GRU neural network to obtain reset data;
and learning the adjusted sample data by using the updated data and the reset data to obtain approximate component characteristic data.
4. The method of claim 1, wherein converting the set of residual component samples into a time-frequency matrix and obtaining a spectrogram of the time-frequency matrix comprises:
converting the residual component sample set into a time-frequency matrix through discrete short-time Fourier transform, and acquiring a spectrogram of the time-frequency matrix;
the discrete short-time Fourier transform is performed by the following formula:

X(m, k) = Σ_{n=−∞}^{+∞} z(n)·g(n − mT)·e^{−jkn}

where z(·) is the source signal, g(·) is the window function, m is the window length, T is the sampling frequency, e^{−jkn} is the complex exponential term, and k ∈ (−∞, +∞).
5. The method of claim 4, wherein said converting the residual component sample set into a time-frequency matrix by discrete short-time fourier transform and obtaining a spectrogram of the time-frequency matrix comprises:
acquiring specified parameters in the residual component sample set; the specified parameters include: source signal, window function, window length, overlap point number, sampling frequency and fourier point number;
calculating the signal length of the source signal, and calculating the sliding times of a window function according to the signal length, the window length and the number of overlapping points;
the source signal corresponding to each window function sliding is expressed as a column, the value of each column is determined, and a sliding matrix of the designated row number and column number is obtained;
converting a window function into a column vector, and expanding the column vector into a vector matrix with a designated column number;
and performing point multiplication operation on the sliding matrix and the vector matrix, performing fast Fourier transform on a point multiplication result to obtain a time-frequency matrix, and acquiring a spectrogram of the time-frequency matrix.
6. An electrical load prediction system, comprising:
the data decomposition and restoration module is used for obtaining historical electricity consumption data, decomposing and restoring the historical electricity consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component;
the characteristic processing module is used for respectively carrying out characteristic processing on the approximate component and the residual component to obtain an approximate component sample set and a residual component sample set;
the approximate component prediction result acquisition module is used for inputting sample data in the approximate component sample set to a full-connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
wherein the approximate component prediction result obtaining module includes:
the normalization processing module is used for inputting the characteristic data into a Softmax layer for normalization processing to obtain prediction results distributed over (−∞, +∞);
the approximate component prediction result screening module is used for converting the prediction results distributed over (−∞, +∞) into probabilities and taking the prediction result with the highest probability as the prediction result of the approximate component;
the residual component prediction result acquisition module is used for converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix and acquiring a residual component image sample set according to the spectrogram; extracting features of sample data in the residual component image sample set, inputting the extracted residual component feature data into a convolutional neural network constructed in advance for learning, and obtaining a prediction result of the residual component;
the residual component prediction result obtaining module includes:
the feature vector acquisition module is used for extracting features of sample data in the residual component image sample set by using a DPN (Dual Path Network), respectively inputting the extracted features to a maximum pooling layer and an average pooling layer in a Python toolkit for pooling processing, summarizing the max-pooled features and the average-pooled features, inputting the summarized features to a flattening layer in the Python toolkit to reduce them to one dimension, and outputting feature vectors;
the feature vector processing module is used for inputting the feature vectors to a [BatchNorm1d, Dropout, Linear, ReLU] module for normalization processing and to a [BatchNorm1d, Dropout, Linear] module for overfitting-prevention processing, and inputting the processed feature vectors to a linear fully-connected layer for linear regression operation to obtain a prediction result of the residual component;
and the power load prediction result acquisition module is used for computing the sum of the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the power load prediction method of any one of claims 1 to 5 when the computer program is executed.
8. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the power load prediction method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110219612.XA CN112883649B (en) | 2021-02-26 | 2021-02-26 | Power load prediction method, system, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112883649A CN112883649A (en) | 2021-06-01 |
CN112883649B true CN112883649B (en) | 2023-08-11 |
Family
ID=76054806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110219612.XA Active CN112883649B (en) | 2021-02-26 | 2021-02-26 | Power load prediction method, system, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112883649B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554466B (en) * | 2021-07-26 | 2023-04-28 | 国网四川省电力公司电力科学研究院 | Short-term electricity consumption prediction model construction method, prediction method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295798A (en) * | 2016-08-29 | 2017-01-04 | 江苏省电力试验研究院有限公司 | Empirical mode decomposition and Elman neural network ensemble wind-powered electricity generation Forecasting Methodology |
CN108256697A (en) * | 2018-03-26 | 2018-07-06 | 电子科技大学 | A kind of Forecasting Methodology for power-system short-term load |
CN109214607A (en) * | 2018-11-13 | 2019-01-15 | 中石化石油工程技术服务有限公司 | Short-term Forecast of Natural Gas Load model based on wavelet theory and neural network |
CN109583635A (en) * | 2018-11-16 | 2019-04-05 | 贵州电网有限责任公司 | A kind of short-term load forecasting modeling method towards operational reliability |
CN110059844A (en) * | 2019-02-01 | 2019-07-26 | 东华大学 | Energy storage device control method based on set empirical mode decomposition and LSTM |
CN111784043A (en) * | 2020-06-29 | 2020-10-16 | 南京工程学院 | Accurate prediction method for power selling amount of power distribution station area based on modal GRU learning network |
CN111950805A (en) * | 2020-08-25 | 2020-11-17 | 润联软件系统(深圳)有限公司 | Medium-and-long-term power load prediction method and device, computer equipment and storage medium |
CN112070301A (en) * | 2020-09-07 | 2020-12-11 | 广东电网有限责任公司电力调度控制中心 | Method, system and equipment for adjusting power consumption of user |
Non-Patent Citations (1)
Title |
---|
Short-Term Load Forecasting with a Hybrid Neural Network Based on Wavelet Transform; Kang Lifeng et al.; Power Demand Side Management; 2007-07-20 (No. 04); pp. 22-26 * |
Also Published As
Publication number | Publication date |
---|---|
CN112883649A (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110135580B (en) | Convolution network full integer quantization method and application method thereof | |
CN109816221B (en) | Project risk decision method, apparatus, computer device and storage medium | |
CN111242377B (en) | Short-term wind speed prediction method integrating deep learning and data denoising | |
Wu et al. | Time series forecasting with missing values | |
CN111950805B (en) | Medium-and-long-term power load prediction method and device, computer equipment and storage medium | |
CN112883649B (en) | Power load prediction method, system, computer equipment and storage medium | |
CN115169746A (en) | Power load short-term prediction method and device based on fusion model and related medium | |
CN114239945B (en) | Short-term power load prediction method, device, equipment and storage medium | |
CN111539558B (en) | Power load prediction method adopting optimization extreme learning machine | |
CN112949610A (en) | Improved Elman neural network prediction method based on noise reduction algorithm | |
CN112988548A (en) | Improved Elman neural network prediction method based on noise reduction algorithm | |
CN110222840B (en) | Cluster resource prediction method and device based on attention mechanism | |
Azami et al. | A new neural network approach for face recognition based on conjugate gradient algorithms and principal component analysis | |
CN116706907B (en) | Photovoltaic power generation prediction method based on fuzzy reasoning and related equipment | |
CN117575685A (en) | Data analysis early warning system and method | |
CN117094431A (en) | DWTfar meteorological data time sequence prediction method and equipment for multi-scale entropy gating | |
CN117010442A (en) | Equipment residual life prediction model training method, residual life prediction method and system | |
CN112706777B (en) | Method and device for adjusting driving behaviors of user under vehicle working conditions | |
CN112686330B (en) | KPI abnormal data detection method and device, storage medium and electronic equipment | |
Li et al. | An innovated integrated model using singular spectrum analysis and support vector regression optimized by intelligent algorithm for rainfall forecasting | |
Zheng et al. | Short-term load forecasting based on Gaussian wavelet SVM | |
CN113688989A (en) | Deep learning network acceleration method, device, equipment and storage medium | |
CN111382891A (en) | Short-term load prediction method and short-term load prediction device | |
CN112732777A (en) | Position prediction method, apparatus, device and medium based on time series | |
CN104992151A (en) | Age estimation method based on TFIDF face image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000 Applicant after: China Resources Digital Technology Co.,Ltd. Address before: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000 Applicant before: Runlian software system (Shenzhen) Co.,Ltd.
GR01 | Patent grant | ||