CN111898799A - BFA-Elman-based power load prediction method - Google Patents

BFA-Elman-based power load prediction method

Info

Publication number
CN111898799A
CN111898799A
Authority
CN
China
Prior art keywords
load
data
time
neural network
elman
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010589377.0A
Other languages
Chinese (zh)
Other versions
CN111898799B (en)
Inventor
李军
李�浩
王茂琦
郑伟伦
徐康民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010589377.0A priority Critical patent/CN111898799B/en
Publication of CN111898799A publication Critical patent/CN111898799A/en
Application granted granted Critical
Publication of CN111898799B publication Critical patent/CN111898799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Marketing (AREA)
  • Molecular Biology (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • General Business, Economics & Management (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Hardware Design (AREA)
  • Development Economics (AREA)
  • Public Health (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Water Supply & Treatment (AREA)
  • Geometry (AREA)
  • Primary Health Care (AREA)

Abstract

The invention discloses a BFA-Elman-based power load prediction method. The method comprises the following steps: acquiring original load data, preprocessing and normalizing the load data of the original load data, removing irregular data and filling missing data; constructing an Elman neural network topological structure, determining parameters of the network, and initializing a weight value and a threshold value of the Elman neural network; setting parameters of a bacterial foraging algorithm BFA, and solving an optimal solution through bacterial decoding to obtain an optimal weight threshold; updating a weight threshold value of the Elman neural network, training the Elman neural network optimized based on the BFA algorithm, and predicting the future power load. The BFA-Elman-based power load prediction method is high in prediction accuracy and meets the demand of a power system on power load prediction.

Description

BFA-Elman-based power load prediction method
Technical Field
The invention relates to the technical field of power, in particular to a BFA-Elman-based power load prediction method.
Background
Power load prediction is one of the important tasks of the power system. Short-term power load prediction fully studies and exploits historical data, given the randomness and uncertainty of short-term loads, to establish a mathematical model that reflects the continuity, periodicity and other characteristics of the historical data, so as to determine the power load one or more days ahead and understand the future development of the load while meeting a high precision requirement.
Therefore, the power load prediction is effective guarantee for safe and economic operation of the power system. At present, traditional prediction technologies such as trend extrapolation and regression analysis used by most power load prediction systems are relatively backward, the prediction precision is not high, and the requirement of high prediction precision cannot be well met. In the trend of power generation and consumption marketization, as the scale of power systems is continuously enlarged, demands for accuracy, real-time performance, reliability and intelligence of load prediction are increasing day by day. Accordingly, improving the power load prediction accuracy has become an important research topic for power enterprises.
Disclosure of Invention
The invention aims to provide a BFA-Elman-based power load prediction method, so that the power load can be predicted accurately, reliably and in real time.
The technical solution for realizing the purpose of the invention is as follows: a BFA-Elman-based power load prediction method comprises the following steps:
step S1: acquiring original load data, preprocessing and normalizing the load data of the original load data, removing irregular data and filling missing data;
step S2: constructing an Elman neural network topological structure, determining parameters of the network, and initializing a weight value and a threshold value of the Elman neural network;
step S3: setting parameters of a bacterial foraging algorithm BFA, and solving an optimal solution through bacterial decoding to obtain an optimal weight threshold;
step S4: updating a weight threshold value of the Elman neural network, training the Elman neural network optimized based on the BFA algorithm, and predicting the future power load.
Further, the step S1 of obtaining historical original load data, performing load data preprocessing and normalization on the original load data, removing irregular data and filling up missing data specifically includes:
step 11: processing missing data;
if the time interval before and after the missing data is within the set threshold, filling the missing data by linear interpolation: given the known load values f_n and f_{n+i} at times n and n+i with the intermediate data missing, the load value at the intermediate time n+j is:
f_{n+j} = f_n + (j/i)·(f_{n+i} − f_n), 0 < j < i;
if the time interval is larger than the set threshold, adjacent data are adopted for replacement;
step 12: processing error data;
comparing the load at time t with the load values at the preceding time t−1 and the following time t+1, and if the difference exceeds the threshold, namely the current load datum varies by more than ±10% of the neighbouring load values, adopting horizontal processing:
|y(d,t) − y(d,t−1)| > θ1 or |y(d,t) − y(d,t+1)| > θ2
then
y(d,t) = [y(d,t−1) + y(d,t+1)]/2
wherein y(d,t) is the load value at time t on the d-th day, y(d,t−1) is the load value at time t−1 on the d-th day, y(d,t+1) is the load value at time t+1 on the d-th day, and θ1, θ2 are set range parameters;
the power load is periodic, so the load value at the same time in the historical data is maintained within a set range; data beyond the range are corrected by vertical processing:
|y(d,t) − ȳ(t)| > θ
then
y(d,t) = ȳ(t)
wherein y(d,t) is the load value at time t on the d-th day, ȳ(t) is the average of the historical data at the same time, and θ is a set range parameter;
step 13: the method for normalizing the historical load data specifically comprises the following steps:
x′ij=lg(xij)
wherein x_ij is the historical load and x′_ij is the normalized load; taking the logarithm of the load yields normalized load data of a continuous time series.
Further, in step S2, the Elman neural network topology is constructed, parameters of the network are determined, and weight values and threshold values of the Elman neural network are initialized, which are specifically as follows:
step 21: the Elman neural network consists of an input layer, a hidden layer and an output layer, and a feedback link exists in the hidden layer, i.e. there is a context layer:
a1(k)=tansig[LW1p+LW1a1(k-1)+b1]
a2(k)=purelin[LW2a1(k)+b2]
the hidden layer of the Elman neural network uses tansig neurons and the output layer purelin neurons; the combination of the two can approximate any function in finite time, and the more hidden-layer neurons there are, the higher the approximation precision;
step 22: through an Elman neural network model, a nonlinear state space equation is expressed as follows:
y(k)=g(w3x(k)+b2)
x(k)=f(w1xc(k)+w2(u(k-1))+b1)
xc(k)=x(k-1)
in the above formulas, k denotes the current time step, y(k) is the 1-dimensional output node vector, x(k−1) and x(k) are hidden-layer node vectors, u(k−1) and u(k) are n-dimensional input vectors, x_c(k) is the m-dimensional feedback state vector, and w1, w2, w3 denote the weight matrices connecting the context layer to the hidden layer, the input layer to the hidden layer, and the hidden layer to the output layer, respectively;
here f(·) is the transfer function of the hidden-layer neurons, taken as tansig, g(·) is the transfer function of the output-layer neurons, taken as purelin, and b1, b2 are the thresholds of the hidden layer and the output layer, respectively.
Further, the step S3 of setting the parameter of the bacterial foraging algorithm BFA, and solving the optimal solution by bacterial decoding specifically includes:
defining the error function of the weight adjustment of the network at the moment k as follows:
E(k) = (1/2)·Σ_i [y_{d,i}(k) − y_i(k)]²
wherein E(k) is the error function, y_{d,i}(k) is the expected output of the i-th output node at time k, and y_i(k) is the actual output of the i-th output node at time k;
taking the partial derivatives of the error E with respect to the weights w1, w2, w3 respectively gives:
∂E/∂w3_ij = −δ0_i·x_j(k)
∂E/∂w2_jq = −δh_j·u_q(k−1)
∂E/∂w1_jl = −Σ_i (δ0_i·w3_ij)·∂x_j(k)/∂w1_jl
wherein x_j(k) and u_q(k−1) are the hidden-layer and input node vectors, x_{c,l}(k) is the feedback state vector, w1, w2, w3 are the weight matrices, δ0_i = (y_{d,i}(k) − y_i(k))·g′_i(·), δh_j = Σ_i (δ0_i·w3_ij)·f′_j(·), and f_j(·) is the transfer function of the hidden-layer neuron;
finally, the weight-update formulas
Δw3_ij = η3·δ0_i·x_j(k), Δw2_jq = η2·δh_j·u_q(k−1), Δw1_jl = η1·Σ_i (δ0_i·w3_ij)·∂x_j(k)/∂w1_jl
solve for the optimal weights, yielding the improved Elman neural network learning algorithm, wherein η1, η2, η3 are the learning rates.
Compared with the prior art, the invention has the following remarkable advantages: (1) the BFA-Elman-based power load prediction method comprises the steps that historical load data are preprocessed, and an Elman neural network is optimized based on a Bacterial Foraging Algorithm (BFA), so that an accurate and effective load prediction model is established; (2) the prediction model can predict future power load data with high precision, ensure safe and reliable operation of a power system and realize maximization of social and economic benefits.
Drawings
FIG. 1 is a flow chart of a BFA-Elman-based power load prediction method of the present invention.
FIG. 2 is a diagram of an Elman neural network model.
Detailed Description
The invention is described in detail below with reference to the drawings and specific embodiments to enable the functions and features of the invention to be better understood. The present embodiment is implemented on the premise of the scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following embodiments.
Examples
With reference to fig. 1, the method for predicting the short-term power load based on the BFA-Elman neural network of the embodiment of the present invention includes the following steps:
Step 1: preprocess the original load data.
Short-term load prediction requires a large amount of historical load data, so accurate prediction first emphasizes the collection and analysis of raw data. Data preprocessing means processing the historical load data before use: removing irregular data and filling in missing data, eliminating the influence of bad or abnormal data, and thus safeguarding the accuracy of load prediction.
The step 1 comprises the following five substeps, as follows:
Step 11: process missing data. If the time interval around the missing data is not large, linear interpolation is used to complete the missing data. For example, if the load values f_n and f_{n+i} at times n and n+i are known and the intermediate data are missing, the value at the intermediate time n+j is:
f_{n+j} = f_n + (j/i)·(f_{n+i} − f_n), 0 < j < i
if the time interval is large, the effect of linear interpolation is not ideal, and data of adjacent days is used instead.
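Step 11 can be sketched in Python as follows. The function name and the maximum gap length are illustrative assumptions, not from the patent; longer gaps are left untouched for replacement with adjacent-day data.

```python
import numpy as np

def fill_missing(load, max_gap=4):
    """Fill NaN gaps in a load series by the linear interpolation
    f[n+j] = f[n] + j*(f[n+i] - f[n])/i when the gap is short;
    leave longer gaps for adjacent-day substitution."""
    y = np.asarray(load, dtype=float).copy()
    n = len(y)
    j = 0
    while j < n:
        if np.isnan(y[j]):
            start = j - 1                       # last known index (time n)
            end = j
            while end < n and np.isnan(y[end]):
                end += 1                        # first known index (time n+i)
            if start >= 0 and end < n and (end - start) <= max_gap + 1:
                i = end - start
                for k in range(1, i):
                    y[start + k] = y[start] + k * (y[end] - y[start]) / i
            j = end
        else:
            j += 1
    return y
```

For example, `fill_missing([1.0, nan, nan, 4.0])` fills the two gaps linearly, while a gap longer than `max_gap` stays NaN.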
Step 12: and processing error data. The load at a certain moment is compared with the load values before and after the moment, and if the difference is larger than a certain threshold value, namely the variation range of the load data is beyond +/-10% of the load values before and after the moment, horizontal processing is adopted. The load value at a certain time is compared with the load values at the same time of the previous day and the previous two days, and if the deviation is out of + -10%, vertical processing is adopted. The two treatment modes are respectively as follows:
step 13: and (4) horizontal treatment. The electric load has continuity, and the load in the adjacent time periods before and after does not have sudden change generally, so the maximum variation range of the data of the time to be processed can be set by taking the load data at the two moments before and after as a reference.
If the data to be processed exceeds the range, the data is regarded as bad data, and then the average value method is adopted for smooth calculation:
|y(d,t) − y(d,t−1)| > θ1 or |y(d,t) − y(d,t+1)| > θ2
then
y(d,t) = [y(d,t−1) + y(d,t+1)]/2
wherein y(d,t) is the load value at time t on the d-th day, y(d,t−1) is the load value at time t−1 on the d-th day, y(d,t+1) is the load value at time t+1 on the d-th day, and θ1, θ2 are set range parameters.
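A minimal sketch of the horizontal processing, assuming the correction replaces an out-of-range value with the average of its two neighbouring values (the averaging rule is the "average value method" named in the text; the function name and thresholds are illustrative):

```python
def horizontal_fix(y_prev, y_cur, y_next, theta1, theta2):
    """Horizontal processing: if the load at time t deviates from either
    neighbouring value by more than the set range parameter, replace it
    with the average of the two neighbours; otherwise keep it."""
    if abs(y_cur - y_prev) > theta1 or abs(y_cur - y_next) > theta2:
        return (y_prev + y_next) / 2.0
    return y_cur
```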
Step 14: and (6) vertically processing. Considering the periodicity of the power load, different dates, especially the days before and after, should have approximately the same load pattern, and the load value at the same time should be maintained within a certain range, and the bad data beyond the range should be corrected.
|y(d,t) − ȳ(t)| > θ
then
y(d,t) = ȳ(t)
wherein y(d,t) is the load value at time t on the d-th day, ȳ(t) is the average load of the historical data at the same time, and θ is a set range parameter.
Step 15: and carrying out normalization processing on the sample data.
In order to eliminate the influence of the difference in the characteristic index units and the difference in the order of magnitude of the characteristic index, it is necessary to normalize them so that each index value is unified in a certain common numerical characteristic range.
Normalization processing of historical load data, wherein logarithmic processing is adopted for loads:
x′_ij = lg(x_ij)    (4)
wherein x_ij is the original load and x′_ij is the normalized load.
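The normalization of equation (4) and its inverse, as a short sketch (function names are illustrative):

```python
import numpy as np

def normalize(load):
    """Logarithmic normalization x' = lg(x) of equation (4)."""
    return np.log10(load)

def denormalize(x_norm):
    """Invert the normalization to recover load values."""
    return np.power(10.0, x_norm)
```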
Step 2: construct a suitable Elman network topology.
The step 2 comprises the following two substeps, as follows:
Step 21: FIG. 2 is a diagram of the Elman neural network model of the present invention. Unlike the RBF neural network, the Elman neural network consists of an input layer, a hidden layer and an output layer, and the hidden layer has a feedback link, i.e. a distinctive context layer.
In fig. 2:
a1(k) = tansig[LW1·p + LW1·a1(k−1) + b1]    (5)
a2(k) = purelin[LW2·a1(k) + b2]    (6)
the hidden layer of the Elman network is tansig neuron, the output layer is purelin neuron, the combination of the two neurons can approximate any function within a limited time, and the higher the number of neurons of the hidden layer, the higher the approximation precision.
Step 22: through the Elman neural network model diagram shown in fig. 2, the nonlinear state space equation can be expressed as follows:
y(k)=g(w3x(k)+b2) (7)
x(k)=f(w1xc(k)+w2(u(k-1))+b1) (8)
xc(k)=x(k-1) (9)
In the above formulas, k denotes the current time step, y(k) is the 1-dimensional output node vector, x(k−1) and x(k) are hidden-layer node vectors, u(k−1) and u(k) are n-dimensional input vectors, and x_c(k) is the m-dimensional feedback state vector. w1, w2, w3 denote the weight matrices connecting the context layer to the hidden layer, the input layer to the hidden layer, and the hidden layer to the output layer, respectively. In the formulas, f(·) is the transfer function of the hidden-layer neurons, generally the tansig function, g(·) is the transfer function of the output-layer neurons, generally the purelin function, and b1, b2 are the thresholds of the hidden layer and the output layer, respectively.
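The state-space equations (7)-(9) can be sketched in NumPy as follows. This is a minimal illustration, not the patent's implementation: dimensions, the initialization scale, and the use of `tanh` for tansig and the identity for purelin are assumptions.

```python
import numpy as np

class ElmanSketch:
    """Minimal Elman forward pass:
    x(k) = f(w1*xc(k) + w2*u(k-1) + b1), xc(k) = x(k-1),
    y(k) = g(w3*x(k) + b2), with f = tansig (tanh) and g = purelin."""

    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w1 = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # context -> hidden
        self.w2 = rng.standard_normal((n_hidden, n_in)) * 0.1      # input -> hidden
        self.w3 = rng.standard_normal((n_out, n_hidden)) * 0.1     # hidden -> output
        self.b1 = np.zeros(n_hidden)
        self.b2 = np.zeros(n_out)
        self.xc = np.zeros(n_hidden)                               # feedback state

    def step(self, u):
        x = np.tanh(self.w1 @ self.xc + self.w2 @ u + self.b1)     # hidden state x(k)
        y = self.w3 @ x + self.b2                                  # purelin output y(k)
        self.xc = x                                                # xc for the next step
        return y
```

Feeding a window of past (normalized) loads as `u` at each step produces the one-step-ahead prediction `y`.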
Step 3: decode the bacteria and solve for the optimal solution to obtain the optimal weight threshold.
The step 3 comprises the following eight substeps, as follows:
step 31: defining the error function of the weight adjustment of the network at the moment k as follows:
E(k) = (1/2)·Σ_i [y_{d,i}(k) − y_i(k)]²    (10)
In the formula: E(k) is the error function, y_{d,i}(k) is the expected output of the i-th output node at time k, and y_i(k) is the actual output of the i-th output node at time k.
Step 32: the error function E is added to the weight value w3The partial derivatives are calculated to obtain:
Figure BDA0002555798960000092
Step 33: let
δ0_i = (y_{d,i}(k) − y_i(k))·g′_i(·)
then
∂E/∂w3_ij = −δ0_i·x_j(k)
Step 34: the error value E is compared with the weight value w2The partial derivatives are calculated to obtain:
Figure BDA0002555798960000095
Step 35: likewise, let
δh_j = Σ_i (δ0_i·w3_ij)·f′_j(·)
then
∂E/∂w2_jq = −δh_j·u_q(k−1)
Step 36: the error value E is compared with the weight value w1The partial derivatives are calculated to obtain:
Figure BDA0002555798960000098
Step 37: from the above formulas and x_c(k) = x(k−1) it can be found that:
∂x_j(k)/∂w1_jl = f′_j(·)·[x_{c,l}(k) + Σ_m w1_jm·∂x_m(k−1)/∂w1_jl]
Step 38: the weight-update formulas
Δw3_ij = η3·δ0_i·x_j(k)
Δw2_jq = η2·δh_j·u_q(k−1)
Δw1_jl = η1·Σ_i (δ0_i·w3_ij)·∂x_j(k)/∂w1_jl
give the improved Elman neural network learning algorithm, wherein
δ0_i = (y_{d,i}(k) − y_i(k))·g′_i(·), δh_j = Σ_i (δ0_i·w3_ij)·f′_j(·)
and η1, η2, η3 are the learning rates of w1, w2, w3, respectively.
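As an illustration of the gradient-descent updates in this step, a self-contained sketch follows. It truncates the recursive term in ∂x(k)/∂w1 (the context state is treated as constant within a step), a common simplification of the full dynamic gradient; function name, dimensions, and the single learning rate are illustrative assumptions.

```python
import numpy as np

def elman_bp_step(w1, w2, w3, b1, b2, xc, u, y_d, eta=0.01):
    """One simplified Elman training step: forward pass with
    f = tanh, g = identity, then dw3 = eta*outer(delta0, x),
    dw2 = eta*outer(deltah, u), dw1 = eta*outer(deltah, xc).
    Returns the updated weights, the new hidden state, and E(k)."""
    x = np.tanh(w1 @ xc + w2 @ u + b1)            # hidden state x(k)
    y = w3 @ x + b2                               # actual output y(k)
    delta0 = y_d - y                              # output error (purelin => g' = 1)
    deltah = (w3.T @ delta0) * (1.0 - x**2)       # backprop through tanh'
    w3 = w3 + eta * np.outer(delta0, x)
    w2 = w2 + eta * np.outer(deltah, u)
    w1 = w1 + eta * np.outer(deltah, xc)          # recursive term truncated
    E = 0.5 * float(delta0 @ delta0)              # error E(k) before the update
    return w1, w2, w3, x, E
```

Repeating the step with the updated weights on the same input and target reduces the error for a sufficiently small learning rate.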
Step 4: update the Elman network with the optimal weight threshold obtained by BFA decoding, train the BFA-Elman model, and predict the short-term power load in the future.
In the invention, the BFA adaptively refines the weights and thresholds to good effect, resolving the diversity of neural network topologies and the uncertainty of the weight threshold. The short-term power load prediction model based on the BFA-Elman neural network is an effective load prediction model: it can predict future power load data with high precision, ensure safe and reliable operation of the power system, and maximize social and economic benefits.
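The BFA optimization itself is not spelled out on this page. As a hedged illustration only, a minimal chemotaxis-plus-swim loop (reproduction and elimination-dispersal steps omitted) that could search a packed weight/threshold vector might look like this; all parameter names and values are assumptions:

```python
import numpy as np

def bfa_minimize(fitness, dim, n_bact=20, n_chem=30, step=0.1, rng=None):
    """Minimal bacterial-foraging sketch: each bacterium tumbles in a
    random direction and keeps swimming while the cost improves.
    `fitness` maps a flat parameter vector (e.g. packed Elman weights
    and thresholds) to a scalar cost to minimize."""
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(-1.0, 1.0, size=(n_bact, dim))
    cost = np.array([fitness(p) for p in pop])
    for _ in range(n_chem):
        for b in range(n_bact):
            d = rng.standard_normal(dim)
            d /= np.linalg.norm(d)              # random tumble direction
            cand = pop[b] + step * d
            c = fitness(cand)
            swims = 0
            while c < cost[b] and swims < 4:    # keep swimming while improving
                pop[b], cost[b] = cand, c
                cand = pop[b] + step * d
                c = fitness(cand)
                swims += 1
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

Using the Elman network's training error as `fitness` would yield the optimal weight threshold that the method then loads into the network before final training.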

Claims (4)

1. A BFA-Elman-based power load prediction method is characterized by comprising the following steps:
step S1: acquiring original load data, preprocessing and normalizing the load data of the original load data, removing irregular data and filling missing data;
step S2: constructing an Elman neural network topological structure, determining parameters of the network, and initializing a weight value and a threshold value of the Elman neural network;
step S3: setting parameters of a bacterial foraging algorithm BFA, and solving an optimal solution through bacterial decoding to obtain an optimal weight threshold;
step S4: updating a weight threshold value of the Elman neural network, training the Elman neural network optimized based on the BFA algorithm, and predicting the future power load.
2. The BFA-Elman-based power load prediction method according to claim 1, wherein said step S1 of obtaining historical original load data, preprocessing and normalizing the original load data for load data, removing irregular data and filling missing data specifically comprises:
step 11: processing missing data;
if the time interval before and after the missing data is within the set threshold, filling the missing data by linear interpolation: given the known load values f_n and f_{n+i} at times n and n+i with the intermediate data missing, the load value at the intermediate time n+j is:
f_{n+j} = f_n + (j/i)·(f_{n+i} − f_n), 0 < j < i;
if the time interval is larger than the set threshold, adjacent data are adopted for replacement;
step 12: processing error data;
comparing the load at time t with the load values at the preceding time t−1 and the following time t+1, and if the difference exceeds the threshold, namely the current load datum varies by more than ±10% of the neighbouring load values, adopting horizontal processing:
|y(d,t) − y(d,t−1)| > θ1 or |y(d,t) − y(d,t+1)| > θ2
then
y(d,t) = [y(d,t−1) + y(d,t+1)]/2
wherein y(d,t) is the load value at time t on the d-th day, y(d,t−1) is the load value at time t−1 on the d-th day, y(d,t+1) is the load value at time t+1 on the d-th day, and θ1, θ2 are set range parameters;
the power load is periodic, so the load value at the same time in the historical data is maintained within a set range; data beyond the range are corrected by vertical processing:
|y(d,t) − ȳ(t)| > θ
then
y(d,t) = ȳ(t)
wherein y(d,t) is the load value at time t on the d-th day, ȳ(t) is the average of the historical data at the same time, and θ is a set range parameter;
step 13: the method for normalizing the historical load data specifically comprises the following steps:
x′ij=lg(xij)
wherein x_ij is the historical load and x′_ij is the normalized load; taking the logarithm of the load yields normalized load data of a continuous time series.
3. The BFA-Elman-based power load prediction method according to claim 1, wherein said step S2 is constructing an Elman neural network topology, determining parameters of the network, initializing weights and thresholds of the Elman neural network, specifically as follows:
step 21: the Elman neural network consists of an input layer, a hidden layer and an output layer, and a feedback link exists in the hidden layer, i.e. there is a context layer:
a1(k)=tansig[LW1p+LW1a1(k-1)+b1]
a2(k)=purelin[LW2a1(k)+b2]
the hidden layer of the Elman neural network uses tansig neurons and the output layer purelin neurons; the combination of the two can approximate any function in finite time, and the more hidden-layer neurons there are, the higher the approximation precision;
step 22: through an Elman neural network model, a nonlinear state space equation is expressed as follows:
y(k)=g(w3x(k)+b2)
x(k)=f(w1xc(k)+w2(u(k-1))+b1)
xc(k)=x(k-1)
in the above formula, k denotes the current time step, y(k) is the 1-dimensional output node vector, x(k-1) and x(k) are the hidden-layer node vectors, u(k-1) and u(k) are the n-dimensional input vectors, and xc(k) is the m-dimensional feedback state vector; w1, w2, w3 respectively denote the weight matrices connecting the context layer to the hidden layer, the input layer to the hidden layer, and the hidden layer to the output layer;
where f(·) is the transfer function of the hidden-layer neurons, for which tansig is used, and g(·) is the transfer function of the output-layer neurons, for which purelin is used; b1 and b2 are the thresholds of the hidden layer and the output layer, respectively.
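The state-space equations of step 22 can be stepped through with a short Python sketch; NumPy's `tanh` and the identity stand in for the tansig/purelin pair, and the weight shapes are illustrative:

```python
import numpy as np

def elman_forward(u_seq, w1, w2, w3, b1, b2):
    """Iterate the Elman state-space equations:
        x(k)  = f(w1·xc(k) + w2·u(k-1) + b1),  f = tanh (tansig)
        y(k)  = g(w3·x(k) + b2),               g = identity (purelin)
        xc(k) = x(k-1)
    With m hidden nodes and n inputs: w1 is (m, m), w2 is (m, n), w3 is (1, m)."""
    xc = np.zeros(b1.shape[0])   # context (feedback) state starts at zero
    outputs = []
    for u in u_seq:              # each u plays the role of u(k-1)
        x = np.tanh(w1 @ xc + w2 @ u + b1)
        y = w3 @ x + b2          # purelin: linear output layer
        outputs.append(y)
        xc = x                   # feedback: xc at the next step is x(k)
    return np.array(outputs)
```

Because xc carries x(k-1) forward, the output at each step depends on the whole input history, which is what distinguishes the Elman network from a plain feedforward network for time-series load data.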
4. The BFA-Elman-based power load prediction method according to claim 1, wherein said step S3 of setting the parameters of the bacterial foraging algorithm BFA and solving for the optimal solution through bacterial decoding is specifically as follows:
defining the error function for the weight adjustment of the network at time k as:
E(k) = (1/2)Σi(yd,i(k)-yi(k))²
where E(k) is the error function, yd,i(k) is the expected output of the ith output node at time k, and yi(k) is the actual output of the ith output node at time k;
taking the partial derivative of the error E with respect to the weights w3, w2, w1 respectively gives:
∂E/∂w3ij = -δ0i·xj(k)
∂E/∂w2jq = -δhj·uq(k-1)
∂E/∂w1jl = -Σi(δ0i·w3ij)·∂xj(k)/∂w1jl, with ∂xj(k)/∂w1jl = f′j(·)·xc,l(k)
where δ0i = (yd,i(k)-yi(k))·g′i(·) and δhj = Σi(δ0i·w3ij)·f′j(·); xj(k) and ui(k) are the hidden-layer and input node vectors, xc,l(k) is the feedback state vector, w3ij is an element of the weight matrix, and f′j(·) is the derivative of the transfer function of the hidden-layer neuron;
finally, the weights are updated along the negative gradient by the formula
w(k+1) = w(k) - η·∂E/∂w
where η is the learning rate; solving for the optimal weights yields the improved Elman neural network learning algorithm.
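For the BFA side of step S3, a chemotaxis-only sketch is shown below; in the patent's setting the fitness would be the network error E(k) evaluated on a decoded weight vector. The reproduction and elimination-dispersal phases of full BFA are omitted, and every parameter value here is illustrative:

```python
import numpy as np

def bfa_minimize(fitness, dim, n_bact=20, n_chem=30, step=0.1, seed=0):
    """Chemotaxis-only bacterial foraging sketch: each bacterium tumbles
    to a random unit direction and keeps swimming while fitness improves."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (n_bact, dim))    # decoded weight vectors
    cost = np.array([fitness(p) for p in pop])
    for _ in range(n_chem):
        for i in range(n_bact):
            d = rng.normal(size=dim)
            d /= np.linalg.norm(d)                 # unit tumble direction
            trial = pop[i] + step * d
            c = fitness(trial)
            while c < cost[i]:                     # swim while improving
                pop[i], cost[i] = trial, c
                trial = pop[i] + step * d
                c = fitness(trial)
    best = cost.argmin()
    return pop[best], cost[best]

# a simple sphere function stands in for the Elman training error E(k)
w_best, e_best = bfa_minimize(lambda w: float(np.sum(w ** 2)), dim=3)
```

In the hybrid method, the BFA result would serve as the initial weights and thresholds of the Elman network, which the gradient-based learning rule above then refines.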
CN202010589377.0A 2020-06-24 2020-06-24 BFA-Elman-based power load prediction method Active CN111898799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010589377.0A CN111898799B (en) 2020-06-24 2020-06-24 BFA-Elman-based power load prediction method

Publications (2)

Publication Number Publication Date
CN111898799A true CN111898799A (en) 2020-11-06
CN111898799B CN111898799B (en) 2022-09-27

Family

ID=73207129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010589377.0A Active CN111898799B (en) 2020-06-24 2020-06-24 BFA-Elman-based power load prediction method

Country Status (1)

Country Link
CN (1) CN111898799B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636822A (en) * 2015-01-21 2015-05-20 广州市香港科大霍英东研究院 Residential load prediction method of Elman-based neural network
CN106022549A (en) * 2016-07-28 2016-10-12 兰州理工大学 Short term load prediction method based on neural network and thinking evolutionary search
CN110009136A (en) * 2019-03-12 2019-07-12 国网江西省电力有限公司电力科学研究院 A kind of load forecasting method of distribution transformer and distribution line
CN110097236A (en) * 2019-05-16 2019-08-06 南京工程学院 A kind of short-term load forecasting method based on FA optimization Elman neural network

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191575A (en) * 2021-06-16 2021-07-30 广东电网有限责任公司 Power distribution network maintenance power failure mode optimization method and device, terminal and storage medium
CN113191575B (en) * 2021-06-16 2024-01-23 广东电网有限责任公司 Optimization method, device, terminal and storage medium for power distribution network maintenance power failure mode
CN114841472A (en) * 2022-06-28 2022-08-02 浙江机电职业技术学院 GWO optimized Elman power load prediction method based on DNA hairpin variation
CN114841472B (en) * 2022-06-28 2022-10-11 浙江机电职业技术学院 GWO optimization Elman power load prediction method based on DNA hairpin variation

Also Published As

Publication number Publication date
CN111898799B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN110705743B (en) New energy consumption electric quantity prediction method based on long-term and short-term memory neural network
CN111582542B (en) Power load prediction method and system based on anomaly repair
CN111898799B (en) BFA-Elman-based power load prediction method
CN112990556A (en) User power consumption prediction method based on Prophet-LSTM model
CN108920812B (en) Machining surface roughness prediction method
CN111890127B (en) Cutting state edge intelligent monitoring method based on online incremental wear evolution model
CN110264079B (en) Hot-rolled product quality prediction method based on CNN algorithm and Lasso regression model
CN112784920B (en) Cloud-edge-end coordinated rotating component reactance domain self-adaptive fault diagnosis method
CN112329990A (en) User power load prediction method based on LSTM-BP neural network
CN111898825A (en) Photovoltaic power generation power short-term prediction method and device
CN111625399A (en) Method and system for recovering metering data
CN112529147A (en) Method and device for predicting ammonia nitrogen content in cross-section water quality
CN111967655A (en) Short-term load prediction method and system
CN112365098A (en) Power load prediction method, device, equipment and storage medium
CN116852665A (en) Injection molding process parameter intelligent adjusting method based on mixed model
Li et al. Short-term traffic flow prediction based on recurrent neural network
CN112765894B (en) K-LSTM-based aluminum electrolysis cell state prediction method
CN111539558B (en) Power load prediction method adopting optimization extreme learning machine
CN112014757A (en) Battery SOH estimation method integrating capacity increment analysis and genetic wavelet neural network
CN112149896A (en) Attention mechanism-based mechanical equipment multi-working-condition fault prediction method
CN116432822A (en) Carbon emission data prediction method, system, equipment and readable storage medium
CN115619028A (en) Clustering algorithm fusion-based power load accurate prediction method
Wang et al. Short term load forecasting: A dynamic neural network based genetic algorithm optimization
CN115081551A (en) RVM line loss model building method and system based on K-Means clustering and optimization
CN106327079B (en) Evaluation method for reactive power optimization control of power distribution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant