CN112883649A - Power load prediction method, system, computer equipment and storage medium - Google Patents

Power load prediction method, system, computer equipment and storage medium

Info

Publication number
CN112883649A
CN112883649A (application CN202110219612.XA; granted publication CN112883649B)
Authority
CN
China
Prior art keywords
component
data
prediction result
residual component
sample set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110219612.XA
Other languages
Chinese (zh)
Other versions
CN112883649B (en)
Inventor
熊娇
刘雨桐
石强
张兴
王国勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Runlian Software System Shenzhen Co Ltd
Original Assignee
Runlian Software System Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Runlian Software System Shenzhen Co Ltd
Priority to CN202110219612.XA
Publication of CN112883649A
Application granted
Publication of CN112883649B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/08Probabilistic or stochastic CAD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • G06F2218/06Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a power load prediction method, system, computer device and storage medium. The method comprises: decomposing and reconstructing historical power consumption data to obtain an approximate component and a residual component, and performing feature processing on each to obtain an approximate component sample set and a residual component sample set; inputting the approximate component sample set into a neural network to obtain a prediction result for the approximate component; inputting the residual component sample set into a neural network to obtain a prediction result for the residual component; and summing the prediction result for the approximate component and the prediction result for the residual component to obtain the power load prediction result. By combining the separate predictions for the approximate and residual components, the method improves both the accuracy and the precision of power load prediction.

Description

Power load prediction method, system, computer equipment and storage medium
Technical Field
The present invention relates to the field of power load prediction technologies, and in particular, to a power load prediction method, a power load prediction system, a computer device, and a storage medium.
Background
With increasingly intense competition in the power market and growing user demand, the safe and economical operation of the power grid has become very important. Power generation must track actual load consumption stably and efficiently; otherwise the safety and stability of the whole power system is jeopardized. Short-term prediction of the power load is therefore necessary: it helps guarantee safe grid operation, reduce generation cost, meet user demand, and improve social and economic benefits. However, power consumption exhibits clear periodic characteristics alongside many random influencing factors, which introduce substantial noise and degrade both the speed and the accuracy of power load prediction.
Disclosure of Invention
The embodiment of the invention provides a power load prediction method, a power load prediction system, computer equipment and a storage medium, and aims to solve the problems of low prediction speed and low prediction precision of power load prediction caused by external random factors in the prior art.
In a first aspect, an embodiment of the present invention provides a power load prediction method, which includes:
obtaining historical power consumption data, and decomposing and reconstructing the historical power consumption data by wavelet transformation to obtain an approximate component and a residual component;
performing feature processing on the approximate component and the residual component respectively to obtain an approximate component sample set and a residual component sample set;
inputting sample data in the approximate component sample set to a fully connected layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component feature data, and inputting the feature data to a Softmax layer for normalization to obtain a prediction result for the approximate component;
converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set from the spectrogram; performing feature extraction on sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain a prediction result for the residual component;
and summing the prediction result for the approximate component and the prediction result for the residual component to obtain the power load prediction result.
In a second aspect, an embodiment of the present invention provides a power load prediction system, which includes:
the data decomposition and reconstruction module, used for acquiring historical power consumption data and decomposing and reconstructing it by wavelet transformation to obtain an approximate component and a residual component;
the feature processing module, used for performing feature processing on the approximate component and the residual component respectively to obtain an approximate component sample set and a residual component sample set;
the approximate component prediction result acquisition module, used for inputting sample data in the approximate component sample set to a fully connected layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component feature data, and inputting the feature data to a Softmax layer for normalization to obtain a prediction result for the approximate component;
the residual component prediction result acquisition module, used for converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set from the spectrogram; performing feature extraction on sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain a prediction result for the residual component;
and the power load prediction result acquisition module, used for summing the prediction result for the approximate component and the prediction result for the residual component to obtain the power load prediction result.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the power load prediction method according to the first aspect when executing the computer program.
In a fourth aspect, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the power load prediction method according to the first aspect.
The embodiment of the invention provides a power load prediction method, system, computer device and storage medium. The method obtains historical power consumption data and decomposes and reconstructs it by wavelet transformation to obtain an approximate component and a residual component; performs feature processing on each to obtain an approximate component sample set and a residual component sample set; inputs sample data in the approximate component sample set to a fully connected layer for vector adjustment, inputs the adjusted sample data to a GRU neural network for learning to obtain approximate component feature data, and inputs the feature data to a Softmax layer for normalization to obtain a prediction result for the approximate component; converts the residual component sample set into a time-frequency matrix, acquires its spectrogram, and builds a residual component image sample set from the spectrogram; performs feature extraction on the image sample set and inputs the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain a prediction result for the residual component; and sums the two prediction results to obtain the power load prediction result.
According to the embodiment of the invention, separate prediction results are obtained for the approximate component and the residual component and then summed, so the influence of noise need not be modeled directly; this increases the speed of power load prediction and improves its accuracy and precision.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a power load prediction method according to an embodiment of the present invention;
fig. 2 is a comparison diagram of a regional prediction result of the power load prediction method according to the embodiment of the present invention;
fig. 3 is a schematic block diagram of a power load prediction system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flow chart of a power load prediction method according to an embodiment of the present invention, where the method includes steps S101 to S105.
S101, obtaining historical power consumption data, and decomposing and reconstructing the historical power consumption data by wavelet transformation to obtain an approximate component and a residual component;
in this step, historical power consumption data over a period of time is obtained and decomposed by wavelet transformation; the decomposed data is then reconstructed to obtain a low-frequency subsequence (the approximate component) and a high-frequency subsequence (the residual component).
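The decompose-and-reconstruct step above can be sketched as follows. The patent does not fix a particular wavelet, so this minimal sketch assumes a one-level Haar transform and a synthetic load series; the names `haar_decompose`, `approx` and `residual` are illustrative:

```python
import numpy as np

def haar_decompose(series):
    """One-level Haar wavelet split of a load series into a smoothed
    low-frequency approximation and a high-frequency residual.
    (Wavelet choice is an assumption; the patent does not specify one.)"""
    x = np.asarray(series, dtype=float)
    assert x.size % 2 == 0, "use an even-length series for this sketch"
    avg = (x[0::2] + x[1::2]) / 2.0   # approximation coefficients
    approx = np.repeat(avg, 2)        # reconstructed low-frequency subsequence
    residual = x - approx             # high-frequency subsequence
    return approx, residual

hours = np.arange(48, dtype=float)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24)  # synthetic daily cycle
approx, residual = haar_decompose(load)
```

By construction the two subsequences sum back to the original series, which is what lets the later step recover the full load prediction by adding the two component predictions.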
S102, performing feature processing on the approximate component and the residual component respectively to obtain an approximate component sample set and a residual component sample set;
in this step, the approximate component is first subjected to feature processing to obtain an approximate component sample set, and the residual component is then subjected to feature processing to obtain a residual component sample set.
In one embodiment, the step S102 includes:
acquiring historical power consumption data before a preset time period, and adding the corresponding exogenous variables in combination with the prediction task of the approximate component to obtain an approximate component sample set;
and acquiring historical power consumption data before the preset time period, and combining it with the prediction task of the residual component to obtain a residual component sample set.
In this embodiment, the approximate component sample set is obtained from historical power consumption data before a predetermined time period, combined with the prediction task of the approximate component and augmented with exogenous variables for the corresponding times. The residual component sample set is likewise obtained from historical power consumption data before the preset time period combined with the prediction task of the residual component; the difference is that no exogenous variables are added to the residual component sample set.
For the feature processing of the approximate component: short-term power prediction generally needs to predict the power consumption at k moments, t moments in advance, so the prediction target is the approximate component data at those k moments. Historical power consumption data before the t + k moments is first acquired; combined with the prediction task of the approximate component, the most strongly correlated and easily obtained influencing factors are sorted, and exogenous variables for the corresponding times are added, finally yielding the approximate component sample set. The exogenous variables include: temperature, time period, rain and snow grade, whether it is a sunny day, whether the area requires heating, whether it is a working day, and holiday indicators such as New Year's Day, Spring Festival, Qingming Festival, Labor Day, Dragon Boat Festival, Mid-Autumn Festival, National Day, and the day before a holiday. For the feature processing of the residual component, the power consumption at k moments is likewise predicted t moments in advance: historical power consumption data before the t + k moments is acquired and combined with the prediction task of the residual component to obtain the residual component sample set.
As shown in fig. 2, k is set to 5h and t is set to 5h. To predict the power consumption for hours 30-35, historical power consumption data for hours 0-20 is obtained 10h in advance, and the corresponding exogenous variables are then added in combination with the prediction task of the approximate component to obtain the approximate component sample set for hours 30-35.
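The sample-construction scheme above (history taken before the t + k moments, targets k steps ahead, exogenous variables attached for the target window) can be sketched as follows; the function and parameter names (`build_samples`, `history`, `lead`, `horizon`) are illustrative, not from the patent:

```python
import numpy as np

def build_samples(series, exog, history=20, lead=10, horizon=5):
    """Build (input, target) pairs: each input holds `history` past points
    plus the exogenous variables for the target window; the target is the
    `horizon` points starting `lead` steps after the history ends."""
    X, y = [], []
    for start in range(len(series) - history - lead - horizon + 1):
        hist_end = start + history
        tgt_start = hist_end + lead
        features = np.concatenate([
            series[start:hist_end],                       # past consumption
            exog[tgt_start:tgt_start + horizon].ravel(),  # e.g. temperature, holiday flags
        ])
        X.append(features)
        y.append(series[tgt_start:tgt_start + horizon])
    return np.array(X), np.array(y)

series = np.arange(100, dtype=float)   # stand-in for hourly consumption
exog = np.zeros((100, 2))              # two illustrative exogenous variables
X, y = build_samples(series, exog)
```

For the residual component the same windowing applies with the `exog` block simply omitted, matching the difference described above.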
S103, inputting sample data in the approximate component sample set to a fully connected layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component feature data, and inputting the feature data to a Softmax layer for normalization to obtain a prediction result for the approximate component;
in this step, the sample data in the approximate component sample set is input into a recurrent neural network (a GRU neural network in this embodiment) for learning. Before entering the recurrent network, the sample data is vector-adjusted by a fully connected layer: samples are fed into the fully connected layer as feature vectors at different step lengths, which enhances the self-adjusting capability of the network and adjusts the input vectors into features the network can easily recognize and use. The adjusted sample data is then input into the pre-constructed GRU neural network for learning to obtain approximate component feature data, and the feature data is finally normalized to obtain the prediction result for the approximate component. The Softmax layer applies a normalized exponential function; the fully connected layer, GRU neural network and Softmax layer used in this step can all be built with toolkits in Python (a cross-platform programming language).
In an embodiment, the inputting the adjusted sample data into the GRU neural network for learning to obtain approximate component feature data comprises:
inputting the adjusted sample data into a pre-constructed GRU neural network;
updating the adjusted sample data by using an update gate in the GRU neural network to obtain updated data, and resetting the adjusted sample data by using a reset gate in the GRU neural network to obtain reset data;
and learning the adjusted sample data by using the updated data and the reset data to obtain approximate component characteristic data.
In this embodiment, a pre-constructed GRU neural network processes the sample data that has been vector-adjusted by the fully connected layer: the update gate updates the adjusted sample data, the reset gate resets it, and the updated data and reset data are then used together to learn from the vector-adjusted sample data, yielding the approximate component feature data. The GRU is a kind of RNN that, like the LSTM, was proposed to address long-term memory and backpropagation gradient problems. Compared with the LSTM, the GRU has one fewer internal gate and therefore fewer parameters, yet achieves a comparable effect; it is also easier to train, which can greatly improve training efficiency.
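The update-gate/reset-gate mechanics described above can be illustrated with a single GRU step in NumPy. This is a minimal sketch of the standard GRU equations with randomly initialized weights, not the patent's trained network:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h, W, U, b):
    """One GRU step. W, U, b hold the parameters of the update gate (z),
    reset gate (r) and candidate state (n), in that order."""
    z = sigmoid(W[0] @ x + U[0] @ h + b[0])        # update gate: how much to refresh
    r = sigmoid(W[1] @ x + U[1] @ h + b[1])        # reset gate: how much history to keep
    n = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])  # candidate state from reset history
    return (1 - z) * h + z * n                     # new hidden state

rng = np.random.default_rng(0)
dim_x, dim_h = 4, 8
W = rng.normal(0, 0.1, (3, dim_h, dim_x))
U = rng.normal(0, 0.1, (3, dim_h, dim_h))
b = np.zeros((3, dim_h))
h = np.zeros(dim_h)
for _ in range(5):                                 # run over a short input sequence
    h = gru_step(rng.normal(size=dim_x), h, W, U, b)
```

The two gates (z and r) are the only gating in the cell, which is the "one fewer gate than the LSTM" property mentioned above.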
In an embodiment, the inputting the feature data into a Softmax layer for normalization processing, and obtaining the prediction result of the approximate component includes:
inputting the feature data into the Softmax layer for normalization, which maps raw prediction scores ranging over (-∞, +∞) to probabilities;
and taking the prediction result with the maximum probability as the prediction result for the approximate component.
In this embodiment, the feature data input to the Softmax layer is normalized, and a vector is output whose values each represent the probability of one prediction result; the result with the highest probability is selected as the prediction result for the approximate component. The main role of the Softmax layer is normalization. For example, if the inputs fall into one hundred classes, the Softmax layer outputs a one-hundred-dimensional vector: the first value is the probability that the current input belongs to the first class, the second value is the probability that it belongs to the second class, and so on up to the hundredth value, and the whole vector sums to 1.
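The normalization the Softmax layer performs can be written out directly; this is the standard softmax function, shown with the usual numerically stable shift:

```python
import numpy as np

def softmax(logits):
    """Map unbounded scores (anywhere in (-inf, +inf)) to probabilities
    that sum to 1, as the Softmax layer described above does."""
    shifted = logits - np.max(logits)  # subtract the max to avoid overflow
    e = np.exp(shifted)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])     # illustrative raw prediction scores
probs = softmax(scores)
predicted = int(np.argmax(probs))      # class with the maximum probability
```

The shift by the maximum changes nothing mathematically (it cancels in the ratio) but keeps `exp` from overflowing on large scores.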
S104, converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set according to the spectrogram; performing feature extraction on sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain a prediction result of the residual component;
in this step, the residual component sample set is converted into a time-frequency matrix by short-time Fourier transform to obtain its spectrogram, forming a residual component image sample set; a pre-constructed DPN network then extracts residual component feature data from the image sample set, and a pre-constructed convolutional neural network finally learns from the feature data to obtain the prediction result for the residual component.
In an embodiment, the converting the residual component sample set into a time-frequency matrix and obtaining a spectrogram of the time-frequency matrix includes:
converting the residual component sample set into a time-frequency matrix through discrete short-time Fourier transform, and acquiring a spectrogram of the time-frequency matrix;
performing the discrete short-time Fourier transform according to:

X(n, ω) = Σ_{k = −∞}^{+∞} z(k) · g(k − nT) · e^{−jωk}

where z(·) is the source signal, g(·) is the window function with window length m, T is the sampling interval, e^{−jωk} is the complex exponential kernel, and k ∈ (−∞, +∞).
In this embodiment, the short-time Fourier transform is a time-frequency analysis method that represents the signal characteristics at a given moment by the segment of signal inside a time window. The window length determines both the time resolution and the frequency resolution of the spectrogram: the longer the window, the longer the intercepted signal, giving higher frequency resolution but worse time resolution after the transform; conversely, the shorter the window, the shorter the intercepted signal, giving poorer frequency resolution but better time resolution. In brief, the short-time Fourier transform multiplies the source signal by a window function, performs a one-dimensional Fourier transform, obtains a series of Fourier transform results as the window slides, and arranges these results into a two-dimensional matrix.
When processing the residual component sample set, the continuous-time Fourier transform of the continuous signal must be replaced by a discrete Fourier transform within each window segment, yielding a short-time Fourier transform that is discrete in both time and frequency and can be implemented on a digital computer.
In an embodiment, the converting the residual component sample set into a time-frequency matrix by a discrete short-time fourier transform, and acquiring a spectrogram of the time-frequency matrix includes:
acquiring specified parameters in the residual component sample set; the specified parameters include: source signal, window function, window length, number of overlapping points, sampling frequency and number of Fourier points;
calculating the signal length of the source signal, and calculating the sliding times of a window function according to the signal length, the window length and the number of overlapping points;
representing the source signal corresponding to each sliding of the window function as a column, determining the value of each column, and obtaining a sliding matrix with the designated row number and column number;
converting the window function into a column vector, and expanding the column vector into a vector matrix with a specified number of columns;
and performing dot multiplication operation on the sliding matrix and the vector matrix, performing Fourier transform on a dot multiplication result to obtain a time-frequency matrix, and acquiring a spectrogram of the time-frequency matrix.
In this embodiment, the specified parameters are determined first, mainly comprising the source signal, the window function, the window length, the number of overlapping points, the sampling frequency, and the number of Fourier points. The number of Fourier points is used in the Fourier transform step: when the signal length is smaller than the number of Fourier points, the system automatically pads the signal with zeros before transforming. After the specified parameters are determined, the signal length of the source signal is calculated, and the number of window slides is computed from the signal length together with the window length and the number of overlapping points. Then, the source signal segment corresponding to each slide of the window function is expressed as a column, giving a sliding matrix with the specified numbers of rows and columns; the window function is converted into a column vector and expanded into a vector matrix with the same number of columns as the sliding matrix. Finally, the sliding matrix and the vector matrix are point-multiplied, a fast Fourier transform is applied to the point-multiplication result to obtain the time-frequency matrix, and a spectrogram is output from the time-frequency matrix.
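The sliding-matrix procedure just described can be sketched in NumPy as follows; the function name, parameter values, and the Hann window are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def manual_stft(z, window, noverlap, nfft):
    """Sliding-matrix STFT: frame the signal into columns, point-multiply
    by the window vector matrix, then Fourier-transform each column."""
    m = len(window)                      # window length
    hop = m - noverlap                   # samples the window advances per slide
    n_slides = 1 + (len(z) - m) // hop   # number of window positions
    # sliding matrix: one column per window position (m rows, n_slides columns)
    frames = np.stack([z[i * hop: i * hop + m] for i in range(n_slides)], axis=1)
    # expand the window column vector to the same number of columns
    win = np.tile(window[:, None], (1, n_slides))
    # point-multiply and FFT each column; zero-padded automatically if m < nfft
    return np.fft.rfft(frames * win, n=nfft, axis=0)  # time-frequency matrix

z = np.random.randn(1000)       # stand-in source signal
window = np.hanning(128)        # window function, length 128
tf = manual_stft(z, window, noverlap=64, nfft=256)
print(tf.shape)  # (129, 14): nfft//2+1 frequency bins, 14 window positions
```

A spectrogram is then simply a plot of `20 * np.log10(np.abs(tf))` over the time and frequency axes.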
In an embodiment, the extracting the features of the sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain the prediction result of the residual component includes:
performing feature extraction on the sample data in the residual component image sample set by using a DPN (Dual Path Network), inputting the extracted features into a maximum pooling layer and an average pooling layer respectively for pooling, summarizing the max-pooled features and the average-pooled features, inputting the summarized features into a flattening layer, and outputting feature vectors;
and performing normalization processing and over-fitting prevention processing on the feature vector, and inputting the processed feature vector into a linear full-connection layer to perform linear regression operation to obtain a prediction result of the residual component.
In this embodiment, the DPN network extracts the residual component feature data; the maximum pooling layer and the average pooling layer in the Python toolkit are called to pool the feature data, all pooled features are summarized and input into the Flatten layer (flattening layer) in the Python toolkit, which flattens the multidimensional input (the summarized features) into feature vectors. The feature vectors are then passed through a [BatchNorm1d, Dropout, Linear(ReLU)] module for normalization, then through a [BatchNorm1d, Dropout, Linear] module to prevent over-fitting, and finally subjected to linear regression. The DPN, also called Dual Path Network, makes fuller use of the features, reduces unnecessary memory copies, and significantly improves training speed.
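A minimal PyTorch sketch of the residual-component head described above; a small convolution stack stands in for the actual DPN backbone, and the layer sizes and dropout rates are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResidualHead(nn.Module):
    """Dual pooling -> flatten -> [BatchNorm1d, Dropout, Linear(ReLU)] ->
    [BatchNorm1d, Dropout, Linear] regression output, as in the text."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for DPN features
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.max_pool = nn.AdaptiveMaxPool2d(1)  # max-pooling branch
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # average-pooling branch
        self.flatten = nn.Flatten()              # flattening layer
        self.head = nn.Sequential(
            nn.BatchNorm1d(2 * feat_ch), nn.Dropout(0.5),
            nn.Linear(2 * feat_ch, 32), nn.ReLU(),   # normalization block
            nn.BatchNorm1d(32), nn.Dropout(0.5),     # over-fitting prevention
            nn.Linear(32, 1))                        # linear regression output

    def forward(self, x):
        f = self.backbone(x)
        # summarize the two pooling branches by concatenation, then flatten
        pooled = torch.cat([self.max_pool(f), self.avg_pool(f)], dim=1)
        return self.head(self.flatten(pooled))

model = ResidualHead()
out = model(torch.randn(4, 3, 32, 32))  # batch of 4 spectrogram images
print(out.shape)  # torch.Size([4, 1])
```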
And S105, counting the sum of the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
In this step, the prediction result of the approximate component and the prediction result of the residual component are cumulatively added, and the cumulative result is the power load prediction result.
Referring to fig. 3, fig. 3 is a schematic block diagram of a power load prediction system according to an embodiment of the present invention, where the power load prediction system 200 includes:
the data decomposition and restoration module 201 is configured to obtain historical power consumption data, decompose and restore the historical power consumption data by using wavelet transformation, and obtain an approximate component and a residual component;
a feature processing module 202, configured to perform feature processing on the approximate component and the residual component respectively to obtain an approximate component sample set and a residual component sample set;
an approximate component prediction result obtaining module 203, configured to input sample data in the approximate component sample set to a full connection layer for vector adjustment, input the adjusted sample data to a GRU neural network for learning to obtain approximate component feature data, and input the feature data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
a residual component prediction result obtaining module 204, configured to convert the residual component sample set into a time-frequency matrix, obtain a spectrogram of the time-frequency matrix, and obtain a residual component image sample set according to the spectrogram; performing feature extraction on sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain a prediction result of the residual component;
the power load prediction result obtaining module 205 is configured to count a sum of the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
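A rough end-to-end sketch of the module pipeline above, assuming the PyWavelets package; the wavelet `db4`, the single decomposition level, and the trivial pass-through predictors standing in for the GRU and CNN models are all illustrative assumptions:

```python
import numpy as np
import pywt

# hourly load series (synthetic stand-in for historical power-consumption data)
load = 100 + 10 * np.sin(np.arange(512) * 2 * np.pi / 24) + np.random.randn(512)

# decompose with a wavelet transform, then restore each part to full length:
# the approximation coefficients give the smooth "approximate component", the
# detail coefficients give the high-frequency "residual component"
cA, cD = pywt.dwt(load, 'db4')
approx = pywt.idwt(cA, None, 'db4')[: len(load)]     # approximate component
residual = pywt.idwt(None, cD, 'db4')[: len(load)]   # residual component

# the two restored components sum back to the original series
assert np.allclose(approx + residual, load)

# each component is predicted by its own model (GRU / CNN in the text);
# the final forecast is the sum of the two component predictions
pred_approx = approx[-1]      # placeholder for the GRU prediction
pred_residual = residual[-1]  # placeholder for the CNN prediction
forecast = pred_approx + pred_residual
print(np.isclose(forecast, load[-1]))  # True for these pass-through predictors
```

The additivity check is what makes module 205's final step valid: because the decomposition is linear, summing the two component forecasts recovers a forecast of the full load.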
In one embodiment, the feature processing module 202 includes:
the approximate component sample set acquisition module is used for acquiring historical power consumption data before a preset time period and adding corresponding exogenous variables in combination with a prediction task of the approximate component to obtain an approximate component sample set;
and the residual component sample set acquisition module is used for acquiring historical power consumption data before a preset time period and obtaining a residual component sample set by combining the prediction task of the residual component.
In one embodiment, the approximate component prediction result obtaining module 203 comprises:
the sample data input module is used for inputting the adjusted sample data into a pre-constructed GRU neural network;
the neural network processing module is used for updating the adjusted sample data by using an updating gate in the GRU neural network to obtain updated data, and resetting the adjusted sample data by using a resetting gate in the GRU neural network to obtain reset data;
and the approximate component characteristic data acquisition module is used for learning the adjusted sample data by utilizing the updated data and the reset data to obtain approximate component characteristic data.
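A minimal PyTorch sketch of this approximate-component path: a fully connected layer adjusts the input vectors, a GRU (whose update and reset gates perform the gated learning described above) extracts features, and a Softmax layer normalizes the output. All dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ApproxPredictor(nn.Module):
    def __init__(self, in_dim=8, hidden=16, out_dim=24):
        super().__init__()
        self.adjust = nn.Linear(in_dim, hidden)      # full-connection vector adjustment
        # nn.GRU implements the update/reset gating internally: the update
        # gate blends old and candidate states, the reset gate controls how
        # much past state feeds the candidate
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h, _ = self.gru(self.adjust(x))              # gated feature learning
        scores = self.out(h[:, -1])                  # features at the last step
        return torch.softmax(scores, dim=-1)         # Softmax normalization

model = ApproxPredictor()
probs = model(torch.randn(2, 48, 8))  # 2 samples, 48 time steps, 8 features
print(probs.shape)                    # torch.Size([2, 24])
print(torch.allclose(probs.sum(dim=-1), torch.ones(2), atol=1e-5))  # True
```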
In one embodiment, the approximate component prediction result obtaining module 203 comprises:
the normalization processing module is used for inputting the characteristic data into a Softmax layer for normalization processing to obtain prediction results ranging over (-∞, +∞);
and the approximate component prediction result screening module is used for converting the prediction results into probabilities and taking the prediction result with the maximum probability as the prediction result of the approximate component.
In one embodiment, the residual component prediction result obtaining module 204 comprises:
the discrete short-time Fourier transform unit is used for converting the residual component sample set into a time-frequency matrix through discrete short-time Fourier transform and acquiring a spectrogram of the time-frequency matrix;
a formula calculation module for performing the discrete short-time Fourier transform by the following formula:

STFT{z}(mT, ω) = ∑_{k=-∞}^{+∞} z(k) · g(k - mT) · e^{-jωk}

where z(·) is the source signal, g(·) is the window function, m is the window length, T is the sampling frequency, e^{-jωk} is a complex exponential, and k ∈ (-∞, +∞).
in one embodiment, the discrete short time fourier transform unit comprises:
a specified parameter obtaining module, configured to obtain specified parameters in the residual component sample set; the specified parameters include: source signal, window function, window length, number of overlapping points, sampling frequency and number of Fourier points;
the window function sliding frequency calculating module is used for calculating the signal length of the source signal and calculating the sliding frequency of the window function according to the signal length, the window length and the number of the overlapped points;
the sliding matrix acquisition module is used for representing the source signals corresponding to each sliding of the window function as columns, determining the value of each column and acquiring a sliding matrix with the specified row number and column number;
the vector matrix acquisition module is used for converting the window function into a column vector and expanding the column vector into a vector matrix with specified column number;
and the spectrogram acquisition module is used for performing point multiplication operation on the sliding matrix and the vector matrix, performing fast Fourier transform on a point multiplication result to obtain a time-frequency matrix, and acquiring a spectrogram of the time-frequency matrix.
In one embodiment, the residual component prediction result obtaining module 204 comprises:
the characteristic vector acquisition module is used for extracting features from sample data in the residual component image sample set by using a DPN (Dual Path Network), inputting the extracted features to a maximum pooling layer and an average pooling layer respectively for pooling, summarizing the max-pooled features and the average-pooled features, inputting the summarized features to a flattening layer, and outputting a feature vector;
and the feature vector processing module is used for carrying out normalization processing and over-fitting prevention processing on the feature vectors, inputting the processed feature vectors into a linear full-connection layer for linear regression operation, and obtaining the prediction result of the residual component.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the power load prediction method as described above is implemented.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for predicting the power load as described above is implemented.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method for predicting a power load, comprising:
obtaining historical power consumption data, and decomposing and restoring the historical power consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component;
respectively performing feature processing on the approximate component and the residual component to obtain an approximate component sample set and a residual component sample set;
inputting sample data in the approximate component sample set to a full-connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
converting the residual component sample set into a time-frequency matrix, acquiring a spectrogram of the time-frequency matrix, and acquiring a residual component image sample set according to the spectrogram; performing feature extraction on sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain a prediction result of the residual component;
and counting the sum of the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
2. The power load prediction method according to claim 1, wherein the performing feature processing on the approximate component and the residual component to obtain an approximate component sample set and a residual component sample set respectively comprises:
acquiring historical power consumption data before a preset time period, and adding corresponding exogenous variables in combination with the prediction tasks of the approximate components to obtain an approximate component sample set;
and acquiring historical power consumption data before a preset time period, and combining the prediction tasks of the residual components to obtain a residual component sample set.
3. The power load prediction method according to claim 1, wherein inputting the adjusted sample data into a GRU neural network for learning to obtain approximate component feature data comprises:
inputting the adjusted sample data into a pre-constructed GRU neural network;
updating the adjusted sample data by using an update gate in the GRU neural network to obtain updated data, and resetting the adjusted sample data by using a reset gate in the GRU neural network to obtain reset data;
and learning the adjusted sample data by using the updated data and the reset data to obtain approximate component characteristic data.
4. The power load prediction method according to claim 1, wherein the inputting the characteristic data into a Softmax layer for normalization processing to obtain the prediction result of the approximate component includes:
inputting the characteristic data into a Softmax layer for normalization processing to obtain prediction results ranging over (-∞, +∞);
and converting the prediction results into probabilities, and taking the prediction result with the maximum probability as the prediction result of the approximate component.
5. The method according to claim 1, wherein the converting the residual component sample set into a time-frequency matrix and obtaining a spectrogram of the time-frequency matrix comprises:
converting the residual component sample set into a time-frequency matrix through discrete short-time Fourier transform, and acquiring a spectrogram of the time-frequency matrix;
performing a discrete short-time Fourier transform by the following equation:

STFT{z}(mT, ω) = ∑_{k=-∞}^{+∞} z(k) · g(k - mT) · e^{-jωk}

where z(·) is the source signal, g(·) is the window function, m is the window length, T is the sampling frequency, e^{-jωk} is a complex exponential, and k ∈ (-∞, +∞).
6. the method according to claim 5, wherein the transforming the residual component sample set into a time-frequency matrix by discrete short-time Fourier transform and obtaining a spectrogram of the time-frequency matrix comprises:
acquiring specified parameters in the residual component sample set; the specified parameters include: source signal, window function, window length, number of overlapping points, sampling frequency and number of Fourier points;
calculating the signal length of the source signal, and calculating the sliding times of a window function according to the signal length, the window length and the number of overlapping points;
representing the source signal corresponding to each sliding of the window function as a column, determining the value of each column, and obtaining a sliding matrix with the designated row number and column number;
converting the window function into a column vector, and expanding the column vector into a vector matrix with a specified number of columns;
and performing dot multiplication operation on the sliding matrix and the vector matrix, performing fast Fourier transform on a dot multiplication result to obtain a time-frequency matrix, and acquiring a spectrogram of the time-frequency matrix.
7. The power load prediction method according to claim 1, wherein the extracting the features of the sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain the prediction result of the residual component, comprises:
performing feature extraction on sample data in the residual component image sample set by using a DPN (Dual Path Network), inputting the extracted features into a maximum pooling layer and an average pooling layer respectively for pooling, summarizing the max-pooled features and the average-pooled features, inputting the summarized features into a flattening layer, and outputting feature vectors;
and performing normalization processing and over-fitting prevention processing on the feature vector, and inputting the processed feature vector into a linear full-connection layer to perform linear regression operation to obtain a prediction result of the residual component.
8. An electrical load prediction system, comprising:
the data decomposition and reduction module is used for acquiring historical power consumption data, decomposing and reducing the historical power consumption data by utilizing wavelet transformation to obtain an approximate component and a residual component;
the characteristic processing module is used for respectively carrying out characteristic processing on the approximate component and the residual component to obtain an approximate component sample set and a residual component sample set;
the approximate component prediction result acquisition module is used for inputting sample data in the approximate component sample set to a full-connection layer for vector adjustment, inputting the adjusted sample data to a GRU neural network for learning to obtain approximate component characteristic data, and inputting the characteristic data to a Softmax layer for normalization processing to obtain a prediction result of the approximate component;
a residual component prediction result obtaining module, configured to convert the residual component sample set into a time-frequency matrix, obtain a spectrogram of the time-frequency matrix, and obtain a residual component image sample set according to the spectrogram; performing feature extraction on sample data in the residual component image sample set, and inputting the extracted residual component feature data into a pre-constructed convolutional neural network for learning to obtain a prediction result of the residual component;
and the power load prediction result acquisition module is used for counting the sum of the prediction result of the approximate component and the prediction result of the residual component to obtain the power load prediction result.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the power load prediction method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the power load prediction method according to any one of claims 1 to 7.
CN202110219612.XA 2021-02-26 2021-02-26 Power load prediction method, system, computer equipment and storage medium Active CN112883649B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110219612.XA CN112883649B (en) 2021-02-26 2021-02-26 Power load prediction method, system, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112883649A true CN112883649A (en) 2021-06-01
CN112883649B CN112883649B (en) 2023-08-11

Family

ID=76054806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110219612.XA Active CN112883649B (en) 2021-02-26 2021-02-26 Power load prediction method, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112883649B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295798A (en) * 2016-08-29 2017-01-04 江苏省电力试验研究院有限公司 Empirical mode decomposition and Elman neural network ensemble wind-powered electricity generation Forecasting Methodology
CN108256697A (en) * 2018-03-26 2018-07-06 电子科技大学 A kind of Forecasting Methodology for power-system short-term load
CN109214607A (en) * 2018-11-13 2019-01-15 中石化石油工程技术服务有限公司 Short-term Forecast of Natural Gas Load model based on wavelet theory and neural network
CN109583635A (en) * 2018-11-16 2019-04-05 贵州电网有限责任公司 A kind of short-term load forecasting modeling method towards operational reliability
CN110059844A (en) * 2019-02-01 2019-07-26 东华大学 Energy storage device control method based on set empirical mode decomposition and LSTM
CN111784043A (en) * 2020-06-29 2020-10-16 南京工程学院 Accurate prediction method for power selling amount of power distribution station area based on modal GRU learning network
CN111950805A (en) * 2020-08-25 2020-11-17 润联软件系统(深圳)有限公司 Medium-and-long-term power load prediction method and device, computer equipment and storage medium
CN112070301A (en) * 2020-09-07 2020-12-11 广东电网有限责任公司电力调度控制中心 Method, system and equipment for adjusting power consumption of user

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KANG LIFENG ET AL.: "Short-term load forecasting with a hybrid neural network based on wavelet transform", POWER DEMAND SIDE MANAGEMENT (《电力需求侧管理》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554466A (en) * 2021-07-26 2021-10-26 国网四川省电力公司电力科学研究院 Short-term power consumption prediction model construction method, prediction method and device
CN113554466B (en) * 2021-07-26 2023-04-28 国网四川省电力公司电力科学研究院 Short-term electricity consumption prediction model construction method, prediction method and device

Also Published As

Publication number Publication date
CN112883649B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
Xiong et al. Short-term wind power forecasting based on Attention Mechanism and Deep Learning
Chen et al. A hybrid application algorithm based on the support vector machine and artificial intelligence: An example of electric load forecasting
CN110610280A (en) Short-term prediction method, model, device and system for power load
CN111950805B (en) Medium-and-long-term power load prediction method and device, computer equipment and storage medium
CN109615124B (en) SCADA master station load prediction method based on deep learning
CN112668611B (en) Kmeans and CEEMD-PE-LSTM-based short-term photovoltaic power generation power prediction method
CN115169746A (en) Power load short-term prediction method and device based on fusion model and related medium
CN114694379B (en) Traffic flow prediction method and system based on self-adaptive dynamic graph convolution
CN116561567A (en) Short-term photovoltaic power prediction model based on variation modal decomposition, construction method and application method
CN112883649A (en) Power load prediction method, system, computer equipment and storage medium
CN114330493A (en) CNN + BiLSTM + Attention wind power ultra-short term power prediction method and system
Zhang et al. Accurate ultra-short-term load forecasting based on load characteristic decomposition and convolutional neural network with bidirectional long short-term memory model
CN116706907B (en) Photovoltaic power generation prediction method based on fuzzy reasoning and related equipment
Sun et al. Short-term power load prediction based on VMD-SG-LSTM
Xinxin et al. Short-term wind speed forecasting based on a hybrid model of ICEEMDAN, MFE, LSTM and informer
CN117196105A (en) People number prediction method, device, computer equipment and storage medium
CN116885699A (en) Power load prediction method based on dual-attention mechanism
Huang et al. Research on PV power forecasting based on wavelet decomposition and temporal convolutional networks
CN113487068B (en) Short-term wind power prediction method based on long-term and short-term memory module
CN111126645A (en) Wind power prediction algorithm based on data mining technology and improved support vector machine
CN114239945A (en) Short-term power load prediction method, device, equipment and storage medium
CN113807605A (en) Power consumption prediction model training method, prediction method and prediction device
CN113112085A (en) New energy station power generation load prediction method based on BP neural network
CN112732777A (en) Position prediction method, apparatus, device and medium based on time series
CN112001519A (en) Power load prediction method based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000

Applicant after: China Resources Digital Technology Co.,Ltd.

Address before: Room 801, building 2, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong 518000

Applicant before: Runlian software system (Shenzhen) Co.,Ltd.

GR01 Patent grant