CN110322073A - Power load prediction method, device and equipment based on recurrent neural network - Google Patents
Power load prediction method, device and equipment based on recurrent neural network
- Publication number
- CN110322073A CN110322073A CN201910615214.2A CN201910615214A CN110322073A CN 110322073 A CN110322073 A CN 110322073A CN 201910615214 A CN201910615214 A CN 201910615214A CN 110322073 A CN110322073 A CN 110322073A
- Authority
- CN
- China
- Prior art keywords
- power load
- neural network
- layer
- data set
- recurrent neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention discloses a power load prediction method based on a recurrent neural network, which comprises the following steps: the acquired power load data set is sequentially fed into multiple layers of recurrent neural networks provided with different jump connections, where the output of each layer at each preset moment within a preset duration is the input of the next layer at the corresponding moment, and the hidden layer state of each layer at the last preset moment within the preset duration is obtained respectively; the hidden layer states of all layers at the last preset moment are input into a fully-connected network to obtain the fusion feature of the power load data set; and the power load value at the next preset moment is predicted using the fusion feature. By applying the technical scheme provided by the embodiments of the invention, the information in the power load data is fully utilized, and the power load prediction accuracy is greatly improved. The invention also discloses a power load prediction device, equipment and a storage medium based on the recurrent neural network, which have corresponding technical effects.
Description
Technical Field
The present invention relates to the field of power load prediction technologies, and in particular, to a power load prediction method, apparatus, device, and computer-readable storage medium based on a recurrent neural network.
Background
Accurate power load prediction is crucial to ensuring the reliable and safe operation of a power system, and it also provides reliable guidance for power production and power supply scheduling. Traditional methods for predicting the power load include time series analysis, regression models, autoregressive moving average models and the like. Since the power load has nonlinear characteristics and the nonlinear fitting capability of these conventional methods is poor, it is difficult to predict the power load accurately.
In recent years, because neural networks have strong self-learning and nonlinear fitting capabilities and can handle the power load prediction problem well, their application to power load prediction has become a research hotspot. Existing methods for predicting the power load with neural networks generally compensate the predicted load output by a neural network prediction model using rough set theory, or predict the short-term load with a recurrent neural network based on an ant colony optimization algorithm. However, neither approach fully utilizes the information in the power load data, and the accuracy of power load prediction is low.
In summary, making full use of the information in the power load data and improving the power load prediction accuracy is a pressing problem for those skilled in the art.
Disclosure of Invention
The invention aims to provide a power load prediction method based on a recurrent neural network, which fully utilizes information in power load data and greatly improves the power load prediction precision; another object of the present invention is to provide a power load prediction apparatus, device and computer readable storage medium based on a recurrent neural network.
In order to solve the technical problems, the invention provides the following technical scheme:
a power load prediction method based on a recurrent neural network, the power load prediction method comprising:
acquiring a pre-stored power load data set;
sequentially passing the power load data set, starting from the bottommost recurrent neural network, through each layer of recurrent neural network provided with different jump connections, to respectively obtain the hidden layer state of each layer of recurrent neural network at the last preset moment within a preset duration; the output of each layer of recurrent neural network at each preset moment within the preset duration is the input of the next layer of recurrent neural network at the corresponding moment; the output of each layer of recurrent neural network at each preset moment is the product of the hidden layer state at the current moment and the output weight matrix of the recurrent neural network of the current layer;
respectively inputting the hidden layer state of each layer of the cyclic neural network at the last preset time within the preset duration into a fully-connected network to obtain the fusion characteristics of the power load data set;
and predicting the power load value at the next preset moment by using the fusion characteristics.
In an embodiment of the present invention, when the magnitude or dimension of the power load data in the power load data set is not consistent, after acquiring the pre-stored power load data set, before sequentially passing the power load data set through each layer of recurrent neural networks with different jump connections from the bottommost recurrent neural network, the method further includes:
normalizing each of the power load data in the set of power load data.
In an embodiment of the present invention, after predicting the power load value at the next preset time by using the fusion feature, the method further includes:
acquiring a real power load value at the next preset moment;
and verifying the effectiveness of the power load prediction model used in the prediction process by using the real power load value and the predicted power load value.
In one embodiment of the present invention, verifying the validity of the electrical load prediction model used in the prediction process using the real electrical load value and the predicted electrical load value comprises:
calculating the root mean square errors of the real power load values of a preset number of different preset moments and the corresponding predicted power load values respectively;
and verifying the effectiveness of the power load prediction model by using the root mean square error.
In an embodiment of the present invention, after calculating root mean square errors of the real power load values and the corresponding predicted power load values at a predetermined number of different preset times, the method further includes:
and performing gradient descent operation on the parameters of the recurrent neural networks of each layer according to the root-mean-square error.
A cyclic neural network-based power load prediction apparatus, the power load prediction apparatus comprising:
the data set acquisition module is used for acquiring a pre-stored power load data set;
the hidden layer state obtaining module is used for sequentially passing the power load data set, starting from the bottommost recurrent neural network, through each layer of recurrent neural network provided with different jump connections, to respectively obtain the hidden layer state of each layer of recurrent neural network at the last preset moment within a preset duration; the output of each layer of recurrent neural network at each preset moment within the preset duration is the input of the next layer of recurrent neural network at the corresponding moment; the output of each layer of recurrent neural network at each preset moment is the product of the hidden layer state at the current moment and the output weight matrix of the recurrent neural network of the current layer;
a fusion characteristic obtaining module, configured to input a hidden layer state of each layer of the recurrent neural networks at a last preset time within the preset duration into a full-connection network, respectively, so as to obtain a fusion characteristic of the power load data set;
and the power load value prediction module is used for predicting the power load value at the next preset moment by utilizing the fusion characteristics.
In one embodiment of the present invention, the method further comprises:
and the normalization module is used for performing normalization operation on each electric load data in the electric load data set after acquiring the pre-stored electric load data set and before sequentially passing the electric load data set from the bottommost layer of cyclic neural network through each layer of cyclic neural network provided with different jump connections when the magnitude or dimension of the electric load data in the electric load data set is inconsistent.
In one embodiment of the present invention, the method further comprises:
the real value acquisition module is used for acquiring the real power load value at the next preset moment after predicting the power load value at the next preset moment by using the fusion characteristics;
and the validity checking module is used for checking the validity of the electric load prediction model used in the prediction process by using the real electric load value and the predicted electric load value.
A cyclic neural network-based power load prediction apparatus comprising:
a memory for storing a computer program;
a processor for implementing the steps of the recurrent neural network-based power load prediction method as described above when executing the computer program.
A computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the recurrent neural network-based power load prediction method as set forth above.
By applying the method provided by the embodiment of the invention, the acquired power load data sets are sequentially input into the multilayer recurrent neural networks with different jump connections, the output of each preset time in the preset time length of each layer of recurrent neural network is the input of the corresponding time of the next layer of recurrent neural network, and the hidden state of the last preset time in the preset time length of each layer of recurrent neural network is respectively obtained. And respectively inputting the hidden layer state of each layer of the cyclic neural network at the last preset time within the preset time into the full-connection network to obtain the fusion characteristic of the power load data set, and predicting the power load value at the next preset time by using the fusion characteristic. The method has the advantages that different jump connections are set for each layer of cyclic neural network, each layer of cyclic neural network has modeling capacity with different time periods, output of each preset time in the preset duration of each layer of cyclic neural network is set as input of the corresponding time of the next layer of cyclic neural network, multi-scale time sequence dependence modeling of a power load data set is achieved, a multi-scale time structure in power load data is captured, fusion characteristics are obtained, the power load value is predicted based on the fusion characteristics, information in the power load data is fully utilized, and power load prediction accuracy is greatly improved.
Accordingly, embodiments of the present invention further provide a power load prediction apparatus based on a recurrent neural network, a device and a computer-readable storage medium corresponding to the power load prediction method based on a recurrent neural network, which have the above technical effects and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of an embodiment of a method for predicting a power load based on a recurrent neural network according to the present invention;
FIG. 2 is a network structure diagram of a power load prediction method based on a recurrent neural network according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another embodiment of a method for predicting a power load based on a recurrent neural network according to an embodiment of the present invention;
FIG. 4 is a block diagram of an electrical load prediction apparatus based on a recurrent neural network according to an embodiment of the present invention;
fig. 5 is a block diagram of a power load prediction apparatus based on a recurrent neural network according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
referring to fig. 1, fig. 1 is a flowchart of an implementation of a recurrent neural network-based power load prediction method according to an embodiment of the present invention, where the method may include the following steps:
s101: a pre-stored power load data set is obtained.
During operation of the power system, the generated power load data may be pre-stored to form a power load data set. When a prediction of future power load values is required, the pre-stored power load data set can be obtained. The power load data set may be the power load data of a user, an enterprise or an organization, which is not limited in the embodiments of the present invention.
S102: and sequentially passing the power load data set from the bottommost layer of cyclic neural network through each layer of cyclic neural network with different jump connections to respectively obtain the hidden layer state of each layer of cyclic neural network at the last preset time within the preset time.
The output of each preset moment in the preset time length of each layer of the recurrent neural network is the input of the corresponding moment of the next layer of the recurrent neural network; the output of each preset time of each layer of the recurrent neural network is the product of the hidden state of the current time and the output weight matrix of the recurrent neural network of the current layer.
A plurality of layers of recurrent neural networks with different jump connections can be built layer by layer in advance. Each layer has a corresponding output and hidden layer state at every preset moment; the output of each layer at each preset moment is the product of the hidden layer state at the current moment and the output weight matrix of the current layer, and the output of each layer at each preset moment within the preset duration is the input of the next layer at the corresponding moment. Each layer of the recurrent neural network can be constructed by the following formulas:

f_t^i = σ(W_f^i · h_{t-k_i}^i + U_f^i · x_t^i + b_f^i)
i_t^i = σ(W_in^i · h_{t-k_i}^i + U_in^i · x_t^i + b_in^i)
g_t^i = tanh(W_c^i · h_{t-k_i}^i + U_c^i · x_t^i + b_c^i)
o_t^i = σ(W_o^i · h_{t-k_i}^i + U_o^i · x_t^i + b_o^i)
c_t^i = f_t^i ⊙ c_{t-k_i}^i + i_t^i ⊙ g_t^i
h_t^i = o_t^i ⊙ tanh(c_t^i)
y_t^i = V^i · h_t^i

where σ denotes the sigmoid function and ⊙ denotes element-wise multiplication; x_t^i and y_t^i respectively represent the input and output of the i-th layer at moment t; f_t^i, i_t^i and o_t^i respectively represent the outputs of the forgetting gate, the input gate and the output gate of the i-th layer; W_f^i, W_in^i, W_c^i and W_o^i respectively represent the forgetting-gate, input-gate, cell-state and output-gate hidden layer weight matrices of the i-th layer; U_f^i, U_in^i, U_c^i and U_o^i respectively represent the forgetting-gate, input-gate, cell-state and output-gate input weight matrices of the i-th layer; V^i represents the hidden-layer-state (output) weight matrix of the i-th layer; b_f^i, b_in^i, b_c^i and b_o^i respectively represent the forgetting-gate, input-gate, cell-state and output-gate biases of the i-th layer; h_t^i and h_{t-k_i}^i respectively represent the hidden layer states of the i-th layer at moments t and t-k_i; k_i is the number of jump connections of the i-th layer; c_t^i and c_{t-k_i}^i respectively represent the cell states of the i-th layer at moments t and t-k_i; and g_t^i represents the candidate information used by the i-th layer for updating the cell state.
The above formulas can be written compactly as:

(h_t^i, c_t^i) = LSTM^i(x_t^i, h_{t-k_i}^i, c_{t-k_i}^i),   y_t^i = V^i · h_t^i
the process of propagation forward from the input of the power load data set to the lowest recurrent neural network can be represented by the following formula:
wherein, ItAnd t ∈ 1., L represents preprocessed power load data,representing the hidden layer state of the ith layer of the recurrent neural network at the t moment,the output of the ith layer of the recurrent neural network at the t moment is shown, N and L respectively show the layer number and the last moment of the recurrent neural network, kiI ∈ 1.. and N denote the number of hopping connections of the ith layer of the recurrent neural network.
Different jump connections are set for each layer of the recurrent neural network, and the output of each layer at each preset moment within the preset duration is used as the input of the next layer at the corresponding moment, so that multi-scale time-sequence dependency relationships are established across the layers. For example, the time interval of each layer of the recurrent neural network may be 1 day, so that each elapsed day corresponds to one preset moment; the time interval can be set according to actual needs, which is not limited in the embodiments of the present invention.
After the power load data set is obtained, it can be input into the bottommost recurrent neural network, the output of the bottommost layer is used as the input of the next layer, and so on, so that the power load data set sequentially passes through the layers of recurrent neural networks with different jump connections starting from the bottommost layer, and the hidden layer states of the layers at the last preset moment are obtained respectively.
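For concreteness, this layer-by-layer flow can be sketched in code. The following snippet is only an illustrative sketch, assuming PyTorch; the class and variable names (SkipLSTMStack, skips, out_proj) are invented for illustration and are not taken from the patent.

```python
import torch
import torch.nn as nn

class SkipLSTMStack(nn.Module):
    """Stack of LSTM layers; layer i recurs over a gap of skips[i] steps."""
    def __init__(self, input_size, hidden_size, skips=(1, 2, 4)):
        super().__init__()
        self.skips = skips
        self.cells = nn.ModuleList()
        self.out_proj = nn.ModuleList()   # plays the role of V^i for each layer
        for i, _ in enumerate(skips):
            in_dim = input_size if i == 0 else hidden_size
            self.cells.append(nn.LSTMCell(in_dim, hidden_size))
            self.out_proj.append(nn.Linear(hidden_size, hidden_size, bias=False))

    def forward(self, x):                      # x: (batch, L, input_size)
        batch, L, _ = x.shape
        last_hidden = []                       # h_L^i for every layer i
        layer_input = x
        for i, k in enumerate(self.skips):
            cell = self.cells[i]
            h0 = x.new_zeros(batch, cell.hidden_size)
            c0 = x.new_zeros(batch, cell.hidden_size)
            hs, cs, outs = [], [], []
            for t in range(L):
                # recurrent connection jumps k steps back: use h_{t-k}, c_{t-k}
                h_prev = hs[t - k] if t - k >= 0 else h0
                c_prev = cs[t - k] if t - k >= 0 else c0
                h, c = cell(layer_input[:, t], (h_prev, c_prev))
                hs.append(h)
                cs.append(c)
                outs.append(self.out_proj[i](h))     # y_t^i = V^i h_t^i
            last_hidden.append(hs[-1])               # hidden state at last moment
            layer_input = torch.stack(outs, dim=1)   # becomes input of layer i+1
        return last_hidden

# usage sketch: 3 layers with jump connections 1, 2 and 4, as in Fig. 2
stack = SkipLSTMStack(input_size=1, hidden_size=32, skips=(1, 2, 4))
states = stack(torch.randn(8, 96, 1))        # list of three (8, 32) tensors
```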
S103: and respectively inputting the hidden layer state of each layer of the cyclic neural network at the last preset time within the preset time into the full-connection network to obtain the fusion characteristic of the power load data set.
After the hidden layer state of each layer of the cyclic neural network at the last preset time is obtained, the hidden layer state of each layer of the cyclic neural network at the last preset time within the preset time duration can be input into the full-connection network, so that the fusion characteristic of the power load data set is obtained, and the specific process can be expressed by the following formula:
h_L = [h_L^1; h_L^2; ...; h_L^N]

where h_L is the concatenation of the hidden layer states of all layers of the recurrent neural network at the last moment, and h_L^i represents the hidden layer state of the i-th layer of the recurrent neural network at the last preset moment.
S104: and predicting the power load value at the next preset moment by using the fusion characteristics.
After obtaining the fusion characteristics of the power load data set, the power load value at the next preset time can be predicted by using the fusion characteristics, and the specific process can be represented by the following formula:
c = W_f · h_L + b

where c is the output of the fully-connected neural network, W_f is the weight matrix, h_L is the concatenation of the hidden layer states of all layers of the recurrent neural network at the last moment, and b is the bias.
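A small companion sketch of this fusion and prediction head, assuming PyTorch; the sizes are illustrative rather than values from the patent:

```python
import torch
import torch.nn as nn

hidden_size, num_layers = 32, 3
fc = nn.Linear(hidden_size * num_layers, 1)       # realizes c = W_f · h_L + b

# last hidden states h_L^1..h_L^N of each layer, e.g. from the stack sketched above
last_hidden = [torch.randn(8, hidden_size) for _ in range(num_layers)]
h_L = torch.cat(last_hidden, dim=-1)              # fusion feature of the data set
c = fc(h_L)                                       # predicted load at the next moment, shape (8, 1)
```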
By applying the method provided by the embodiment of the invention, the acquired power load data sets are sequentially input into the multilayer recurrent neural networks with different jump connections, the output of each preset time in the preset time length of each layer of recurrent neural network is the input of the corresponding time of the next layer of recurrent neural network, and the hidden state of the last preset time in the preset time length of each layer of recurrent neural network is respectively obtained. And respectively inputting the hidden layer state of each layer of the cyclic neural network at the last preset time within the preset time into the full-connection network to obtain the fusion characteristic of the power load data set, and predicting the power load value at the next preset time by using the fusion characteristic. The method has the advantages that different jump connections are set for each layer of cyclic neural network, each layer of cyclic neural network has modeling capacity with different time periods, output of each preset time in the preset duration of each layer of cyclic neural network is set as input of the corresponding time of the next layer of cyclic neural network, multi-scale time sequence dependence modeling of a power load data set is achieved, a multi-scale time structure in power load data is captured, fusion characteristics are obtained, the power load value is predicted based on the fusion characteristics, information in the power load data is fully utilized, and power load prediction accuracy is greatly improved.
It should be noted that, based on the first embodiment, the embodiment of the present invention further provides a corresponding improvement scheme. In the following embodiments, steps that are the same as or correspond to those in the first embodiment may be referred to each other, and corresponding advantageous effects may also be referred to each other, which are not described in detail in the following modified embodiments.
In a specific example, as shown in fig. 2, a three-layer recurrent neural network with jump connections of 1, 2 and 4 respectively models time-sequence dependencies of different scales in the power load data. The power load data set is sliced in the prediction process to obtain a plurality of subsequences, where the number of moments in each subsequence is L. The power load value at moment L+1 is predicted from the power load values at moments 1 to L, the power load value at moment L+2 is predicted from the power load values at moments 2 to L+1, and so on. x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, ..., x_L respectively represent the power load values from the 1st to the L-th moments in a subsequence obtained by slicing the power load data set. The output of each layer at each preset moment within the preset duration, starting from the bottommost recurrent neural network, is taken as the input of the next layer at the corresponding moment, and the hidden layer state of each layer at the last preset moment is obtained respectively. The hidden layer states of all layers at the last preset moment within the preset duration are input into the fully-connected layer to obtain the fusion feature of the power load data set, and the power load value x_{L+1} at the next preset moment is predicted using the fusion feature.
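The slicing of the load series into length-L subsequences can be illustrated with a short sketch; NumPy is assumed, and the helper name make_windows and the sample series are invented for illustration:

```python
import numpy as np

def make_windows(series, L):
    """Return inputs x_1..x_L and target x_{L+1} for every sliding position."""
    X, y = [], []
    for start in range(len(series) - L):
        X.append(series[start:start + L])
        y.append(series[start + L])
    return np.array(X), np.array(y)

loads = np.sin(np.linspace(0.0, 20.0, 500))   # stand-in for a real load series
X, y = make_windows(loads, L=96)
print(X.shape, y.shape)                       # (404, 96) (404,)
```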
Referring to fig. 3, fig. 3 is a flowchart of another implementation of the method for predicting a power load based on a recurrent neural network according to the embodiment of the present invention, where the method may include the following steps:
s301: a pre-stored power load data set is obtained.
S302: when the magnitude or dimension of the power load data in the power load data set is not consistent, each power load data in the power load data set is normalized.
After the pre-stored power load data set is acquired, if the magnitudes or dimensions of the power load data in the power load data set are not the same, each power load datum in the power load data set may be normalized. For a raw power load data set X = [x_1, x_2, ..., x_S]^T, normalization is performed with the following formula:

I_i = (x_i - min(x)) / (max(x) - min(x))

where x_i represents the power load data at the i-th moment in the original power load data set, S represents the total number of moments in the original power load data set (S > L), I_i represents the normalized power load data, and max(x) and min(x) respectively represent the maximum and minimum values of the power load data in the original power load data set.

The normalized power load data set I = [I_1, I_2, ..., I_S]^T is thus obtained. Normalizing the power load data when its magnitude or dimension is inconsistent speeds up the feature fusion performed by each layer of the recurrent neural network and improves the feature-fusion effect.
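A minimal sketch of this min-max normalization, assuming NumPy; the raw load values below are invented for illustration:

```python
import numpy as np

def min_max_normalize(x):
    """Map each load value into [0, 1] using the data set's min and max."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

raw = np.array([320.0, 410.5, 298.2, 505.0])   # invented raw load values
print(min_max_normalize(raw))                  # e.g. [0.105..., 0.543..., 0., 1.]
```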
S303: and sequentially passing the power load data set from the bottommost layer of cyclic neural network through each layer of cyclic neural network with different jump connections to respectively obtain the hidden layer state of each layer of cyclic neural network at the last preset moment.
The output of each preset moment in the preset time length of each layer of the recurrent neural network is the input of the corresponding moment of the next layer of the recurrent neural network; the output of each preset time of each layer of the recurrent neural network is the product of the hidden state of the current time and the output weight matrix of the recurrent neural network of the current layer.
S304: and respectively inputting the hidden layer state of each layer of the cyclic neural network at the last preset time within the preset time into the full-connection network to obtain the fusion characteristic of the power load data set.
S305: and predicting the power load value at the next preset moment by using the fusion characteristics.
S306: and acquiring the real power load value at the next preset moment.
The true power load value at the next preset time can be obtained.
S307: and calculating the root mean square errors of the real power load values at different preset moments and the corresponding predicted power load values in a preset number.
After the power load value at the next preset time is predicted and the real power load value at the next preset time, that is, the real power load value corresponding to the predicted power load value is obtained, the root mean square errors of the real power load values at a predetermined number of different preset times and the predicted power load values respectively corresponding to the real power load values can be calculated. The calculation of the root mean square error can be represented by the following formula:
RMSE = sqrt( (1/n) · Σ_{i=1}^{n} (y_i - ŷ_i)^2 )

where n represents the number of predicted values, and y_i and ŷ_i respectively represent the i-th real power load value and the i-th predicted power load value.
It should be noted that the predetermined number n may be set and adjusted according to actual situations, which is not limited in the embodiment of the present invention.
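A short RMSE sketch for this validity check, assuming NumPy; the sample values are invented:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between real and predicted load values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(rmse([100.0, 120.0, 130.0], [98.0, 125.0, 128.0]))  # ≈ 3.317
```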
S308: and checking the effectiveness of the power load prediction model by using the root mean square error.
After the root mean square errors between the real power load values at the predetermined number of different preset moments and the corresponding predicted power load values have been calculated, the effectiveness of the power load prediction model can be checked using the root mean square error. For example, the root mean square error calculated with this method can be compared with the root mean square error obtained by an existing method, so as to test the effectiveness of the power load prediction model.
S309: and performing gradient descent operation on the parameters of each layer of the recurrent neural network according to the root mean square error.
After the root mean square error of the predicted power load value is obtained, gradient descent operation can be carried out on the parameters of each layer of the recurrent neural network according to the root mean square error. For example, in the back propagation process of the recurrent neural network, the BP algorithm is used to minimize the loss function, so that the network parameters of each layer of recurrent neural network are optimized, and the power load prediction accuracy is further improved.
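The optimization step described above can be sketched as one training iteration, assuming PyTorch; the stand-in model, tensor shapes and learning rate are illustrative and are not the patent's actual network or settings:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(96, 64), nn.Tanh(), nn.Linear(64, 1))  # stand-in predictor
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

windows = torch.randn(32, 96)    # a batch of length-96 load subsequences
targets = torch.randn(32, 1)     # the corresponding next-moment loads

optimizer.zero_grad()
loss = loss_fn(model(windows), targets)   # minimizing MSE also minimizes RMSE
loss.backward()                           # back-propagation through every layer
optimizer.step()                          # gradient descent on all parameters
```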
Corresponding to the above method embodiments, embodiments of the present invention further provide a power load prediction apparatus based on a recurrent neural network, and the power load prediction apparatus based on the recurrent neural network described below and the power load prediction method based on the recurrent neural network described above may be referred to correspondingly.
Referring to fig. 4, fig. 4 is a block diagram of a power load prediction apparatus based on a recurrent neural network according to an embodiment of the present invention, where the apparatus may include:
a data set acquiring module 41, configured to acquire a pre-stored power load data set;
the hidden layer state obtaining module 42 is configured to sequentially pass the power load data set through each layer of cyclic neural network provided with different jump connections from the bottommost cyclic neural network, and respectively obtain the hidden layer state of each layer of cyclic neural network at the last preset time; the output of each preset moment in the preset time length of each layer of the recurrent neural network is the input of the corresponding moment of the next layer of the recurrent neural network; the output of each preset time of each layer of the recurrent neural network is the product of the hidden state of the current time and the output weight matrix of the recurrent neural network of the current layer;
a fusion characteristic obtaining module 43, configured to input the hidden layer state of each layer of the recurrent neural networks at the last preset time within a preset time duration to the full-connection network, respectively, so as to obtain a fusion characteristic of the power load data set;
and the power load value prediction module 44 is configured to predict the power load value at the next preset time by using the fusion characteristics.
By applying the device provided by the embodiment of the invention, the acquired power load data sets are sequentially input into the multilayer recurrent neural networks with different jump connections, the output of each preset time in the preset duration of each layer of recurrent neural network is the input of the corresponding time of the next layer of recurrent neural network, and the hidden layer state of the last preset time of each layer of recurrent neural network is respectively obtained. Respectively inputting the hidden layer state of each layer of cyclic neural network at the last preset time within a preset time into a full-connection network to obtain the fusion characteristic of the power load data set; and predicting the power load value at the next preset moment by using the fusion characteristics. The method has the advantages that different jump connections are set for each layer of cyclic neural network, each layer of cyclic neural network has modeling capacity with different time periods, output of each preset time in the preset duration of each layer of cyclic neural network is set as input of the corresponding time of the next layer of cyclic neural network, multi-scale time sequence dependence modeling of a power load data set is achieved, a multi-scale time structure in power load data is captured, fusion characteristics are obtained, the power load value is predicted based on the fusion characteristics, information in the power load data is fully utilized, and power load prediction accuracy is greatly improved.
In one embodiment of the present invention, the apparatus may further include:
and the normalization module is used for performing normalization operation on each electric load data in the electric load data set after the pre-stored electric load data set is obtained and before the electric load data set sequentially passes through each layer of cyclic neural network provided with different jump connections from the bottommost layer of cyclic neural network when the magnitude or dimension of the electric load data in the electric load data set is inconsistent.
In one embodiment of the present invention, the apparatus may further include:
the real value acquisition module is used for acquiring the real power load value at the next preset moment after predicting the power load value at the next preset moment by utilizing the fusion characteristics;
and the validity checking module is used for checking the validity of the electric load prediction model used in the prediction process by using the real electric load value and the predicted electric load value.
In one embodiment of the invention, the validity check module includes a root mean square error calculation sub-module and a validity check sub-module,
the root mean square error calculation submodule is used for calculating root mean square errors of real power load values at different preset moments and predicted power load values corresponding to the real power load values in a preset number;
and the validity checking sub-module is used for checking the validity of the power load prediction model by using the root mean square error.
In one embodiment of the present invention, the apparatus may further include:
and the gradient descent module is used for performing gradient descent operation on the parameters of each layer of the cyclic neural network according to the root mean square errors after calculating the root mean square errors of the real power load values of a preset number of different preset moments and the corresponding predicted power load values respectively.
Corresponding to the above method embodiment, referring to fig. 5, fig. 5 is a schematic diagram of a cyclic neural network-based power load prediction apparatus provided in the present invention, and the apparatus may include:
a memory 51 for storing a computer program;
the processor 52, when executing the computer program stored in the memory 51, may implement the following steps:
acquiring a pre-stored power load data set; sequentially passing the power load data set from the bottommost layer of cyclic neural network through each layer of cyclic neural network provided with different jump connections to respectively obtain the hidden layer state of each layer of cyclic neural network at the last preset moment; the output of each preset moment in the preset time length of each layer of the recurrent neural network is the input of the corresponding moment of the next layer of the recurrent neural network; the output of each preset time of each layer of the recurrent neural network is the product of the hidden state of the current time and the output weight matrix of the recurrent neural network of the current layer; respectively inputting the hidden layer state of each layer of cyclic neural network at the last preset time within a preset time into a full-connection network to obtain the fusion characteristic of the power load data set; and predicting the power load value at the next preset moment by using the fusion characteristics.
For the introduction of the device provided by the present invention, please refer to the above method embodiment, which is not described herein again.
Corresponding to the above method embodiment, the present invention further provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of:
acquiring a pre-stored power load data set; sequentially passing the power load data set from the bottommost layer of cyclic neural network through each layer of cyclic neural network provided with different jump connections to respectively obtain the hidden layer state of each layer of cyclic neural network at the last preset moment; the output of each preset moment in the preset time length of each layer of the recurrent neural network is the input of the corresponding moment of the next layer of the recurrent neural network; the output of each preset time of each layer of the recurrent neural network is the product of the hidden state of the current time and the output weight matrix of the recurrent neural network of the current layer; respectively inputting the hidden layer state of each layer of cyclic neural network at the last preset time within a preset time into a full-connection network to obtain the fusion characteristic of the power load data set; and predicting the power load value at the next preset moment by using the fusion characteristics.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For the introduction of the computer-readable storage medium provided by the present invention, please refer to the above method embodiments, which are not described herein again.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device, the apparatus and the computer-readable storage medium disclosed in the embodiments correspond to the method disclosed in the embodiments, so that the description is simple, and the relevant points can be referred to the description of the method.
The principle and the implementation of the present invention are explained in the present application by using specific examples, and the above description of the embodiments is only used to help understanding the technical solution and the core idea of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
Claims (10)
1. A power load prediction method based on a recurrent neural network is characterized by comprising the following steps:
acquiring a pre-stored power load data set;
sequentially passing the power load data set, starting from the bottommost recurrent neural network, through each layer of recurrent neural network provided with different jump connections, to respectively obtain the hidden layer state of each layer of recurrent neural network at the last preset moment within a preset duration; the output of each layer of recurrent neural network at each preset moment within the preset duration is the input of the next layer of recurrent neural network at the corresponding moment; the output of each layer of recurrent neural network at each preset moment is the product of the hidden layer state at the current moment and the output weight matrix of the recurrent neural network of the current layer;
respectively inputting the hidden layer state of each layer of the cyclic neural network at the last preset time within the preset duration into a fully-connected network to obtain the fusion characteristics of the power load data set;
and predicting the power load value at the next preset moment by using the fusion characteristics.
2. The power load prediction method according to claim 1, wherein when the magnitudes or dimensions of the power load data in the power load data set are not consistent, after the pre-stored power load data set is acquired, before the power load data set is sequentially passed through the layers of the recurrent neural networks provided with different hopping connections from the bottommost recurrent neural network, the method further comprises:
normalizing each of the power load data in the set of power load data.
3. The power load prediction method according to claim 1 or 2, further comprising, after predicting the power load value at the next preset time using the fused feature:
acquiring a real power load value at the next preset moment;
and verifying the effectiveness of the power load prediction model used in the prediction process by using the real power load value and the predicted power load value.
4. The method of claim 3, wherein verifying the validity of the power load prediction model used in the prediction process using the real power load value and the predicted power load value comprises:
calculating the root mean square errors of the real power load values of a preset number of different preset moments and the corresponding predicted power load values respectively;
and verifying the effectiveness of the power load prediction model by using the root mean square error.
5. The power load prediction method according to claim 4, further comprising, after calculating root mean square errors of the real power load values and the corresponding predicted power load values at a predetermined number of different preset times:
and performing gradient descent operation on the parameters of the recurrent neural networks of each layer according to the root-mean-square error.
6. An electrical load prediction apparatus based on a recurrent neural network, the electrical load prediction apparatus comprising:
the data set acquisition module is used for acquiring a pre-stored power load data set;
the hidden layer state obtaining module is used for sequentially passing the power load data set, starting from the bottommost recurrent neural network, through each layer of recurrent neural network provided with different jump connections, to respectively obtain the hidden layer state of each layer of recurrent neural network at the last preset moment; the output of each layer of recurrent neural network at each preset moment within the preset duration is the input of the next layer of recurrent neural network at the corresponding moment; the output of each layer of recurrent neural network at each preset moment is the product of the hidden layer state at the current moment and the output weight matrix of the recurrent neural network of the current layer;
a fusion characteristic obtaining module, configured to input a hidden layer state of each layer of the recurrent neural networks at a last preset time within the preset duration into a full-connection network, respectively, so as to obtain a fusion characteristic of the power load data set;
and the power load value prediction module is used for predicting the power load value at the next preset moment by utilizing the fusion characteristics.
7. The electrical load prediction apparatus according to claim 6, further comprising:
and the normalization module is used for performing normalization operation on each electric load data in the electric load data set after acquiring the pre-stored electric load data set and before sequentially passing the electric load data set from the bottommost layer of cyclic neural network through each layer of cyclic neural network provided with different jump connections when the magnitude or dimension of the electric load data in the electric load data set is inconsistent.
8. The power load prediction device according to claim 6 or 7, further comprising:
the real value acquisition module is used for acquiring the real power load value at the next preset moment after predicting the power load value at the next preset moment by using the fusion characteristics;
and the validity checking module is used for checking the validity of the electric load prediction model used in the prediction process by using the real electric load value and the predicted electric load value.
9. A cyclic neural network-based power load prediction apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the recurrent neural network-based power load prediction method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the recurrent neural network-based power load prediction method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910615214.2A CN110322073A (en) | 2019-07-09 | 2019-07-09 | Power load prediction method, device and equipment based on recurrent neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910615214.2A CN110322073A (en) | 2019-07-09 | 2019-07-09 | Power load prediction method, device and equipment based on recurrent neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110322073A true CN110322073A (en) | 2019-10-11 |
Family
ID=68121663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910615214.2A Pending CN110322073A (en) | 2019-07-09 | 2019-07-09 | Power load prediction method, device and equipment based on recurrent neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110322073A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111260030A (en) * | 2020-01-13 | 2020-06-09 | 润联软件系统(深圳)有限公司 | A-TCN-based power load prediction method and device, computer equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106960252A (en) * | 2017-03-08 | 2017-07-18 | 深圳市景程信息科技有限公司 | Methods of electric load forecasting based on long Memory Neural Networks in short-term |
CN107622329A (en) * | 2017-09-22 | 2018-01-23 | 深圳市景程信息科技有限公司 | The Methods of electric load forecasting of Memory Neural Networks in short-term is grown based on Multiple Time Scales |
US20180276535A1 (en) * | 2017-03-27 | 2018-09-27 | Microsoft Technology Licensing, Llc | Input-output example encoding |
CN109245099A (en) * | 2018-10-29 | 2019-01-18 | 南方电网科学研究院有限责任公司 | Power load identification method, device, equipment and readable storage medium |
CN109711316A (en) * | 2018-12-21 | 2019-05-03 | 广东工业大学 | A kind of pedestrian recognition methods, device, equipment and storage medium again |
CN109740481A (en) * | 2018-12-26 | 2019-05-10 | 山东科技大学 | Atrial fibrillation Modulation recognition method of the CNN based on jump connection in conjunction with LSTM |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106960252A (en) * | 2017-03-08 | 2017-07-18 | 深圳市景程信息科技有限公司 | Methods of electric load forecasting based on long Memory Neural Networks in short-term |
US20180276535A1 (en) * | 2017-03-27 | 2018-09-27 | Microsoft Technology Licensing, Llc | Input-output example encoding |
CN107622329A (en) * | 2017-09-22 | 2018-01-23 | 深圳市景程信息科技有限公司 | The Methods of electric load forecasting of Memory Neural Networks in short-term is grown based on Multiple Time Scales |
CN109245099A (en) * | 2018-10-29 | 2019-01-18 | 南方电网科学研究院有限责任公司 | Power load identification method, device, equipment and readable storage medium |
CN109711316A (en) * | 2018-12-21 | 2019-05-03 | 广东工业大学 | A kind of pedestrian recognition methods, device, equipment and storage medium again |
CN109740481A (en) * | 2018-12-26 | 2019-05-10 | 山东科技大学 | Atrial fibrillation Modulation recognition method of the CNN based on jump connection in conjunction with LSTM |
Non-Patent Citations (2)
Title |
---|
CHANG S ET AL.: "Dilated Recurrent Neural Networks", 《PROCEEDINGS OF THE 31ST INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS》 *
张宇帆 等: "基于深度长短时记忆网络的区域级超短期负荷预测方法", 《电网技术》 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111260030A (en) * | 2020-01-13 | 2020-06-09 | 润联软件系统(深圳)有限公司 | A-TCN-based power load prediction method and device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20160239592A1 (en) | Data-driven battery aging model using statistical analysis and artificial intelligence | |
US20230118702A1 (en) | Method, device and computer readable storage medium for estimating SOC of lithium battery | |
CN109768340B (en) | Method and device for estimating voltage inconsistency in battery discharge process | |
CN109800446B (en) | Method and device for estimating voltage inconsistency in discharging process of lithium ion battery | |
CN113791351B (en) | Lithium battery life prediction method based on transfer learning and difference probability distribution | |
CN112686380A (en) | Neural network-based echelon power cell consistency evaluation method and system | |
CN116316699A (en) | Large power grid frequency security situation prediction method, device and storage medium | |
CN116125279A (en) | Method, device, equipment and storage medium for determining battery health state | |
CN117674119A (en) | Power grid operation risk assessment method, device, computer equipment and storage medium | |
AU2021106200A4 (en) | Wind power probability prediction method based on quantile regression | |
CN110322073A (en) | Power load prediction method, device and equipment based on recurrent neural network | |
CN117833231A (en) | Load prediction method based on Bi-LSTM and dual-attention mechanism | |
CN115879378B (en) | Training method and device for expansion force prediction model of battery cell | |
CN116643177A (en) | Online battery health degree prediction method, device, equipment and medium | |
CN110610140A (en) | Training method, device and equipment of face recognition model and readable storage medium | |
CN116011071A (en) | Method and system for analyzing structural reliability of air building machine based on active learning | |
CN117114458A (en) | Active power distribution network power quality assessment method, device, terminal and storage medium | |
CN111967693B (en) | Search and rescue resource scheme adjusting method based on interference management and related equipment | |
CN113627655B (en) | Method and device for simulating and predicting pre-disaster fault scene of power distribution network | |
CN113723593B (en) | Cut load prediction method and system based on neural network | |
CN115239007A (en) | Power grid net load prediction method and device, electronic equipment and storage medium | |
CN109800923B (en) | Short-term power combination prediction method for distributed wind power generation | |
CN110309982A (en) | Power load prediction method, device and equipment based on matrix decomposition | |
Olowolaju et al. | A Fast Reliability and Sensitivity Analysis Approach for Composite Generation and Transmission Systems | |
CN118330469B (en) | Lithium ion battery health state estimation method based on tense graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20191011 |