CN112200383A - Power load prediction method based on improved Elman neural network - Google Patents

Power load prediction method based on improved Elman neural network Download PDF

Info

Publication number
CN112200383A
CN112200383A (application CN202011168134.6A)
Authority
CN
China
Prior art keywords
layer
output
neural network
elman neural
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011168134.6A
Other languages
Chinese (zh)
Other versions
CN112200383B (en)
Inventor
章伟斌
史旭华
蓝艇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Lxing Polytron Technologies Inc
Original Assignee
Ningbo Lxing Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Lxing Polytron Technologies Inc filed Critical Ningbo Lxing Polytron Technologies Inc
Priority to CN202011168134.6A
Publication of CN112200383A
Application granted
Publication of CN112200383B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 Energy or water supply
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Optimization (AREA)
  • General Business, Economics & Management (AREA)
  • Mathematical Analysis (AREA)
  • Marketing (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Water Supply & Treatment (AREA)
  • Public Health (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a power load prediction method based on an improved Elman neural network. The method first determines the number D of power load measurements recorded per day in a power supply area, combines the N = 365 × D power load data of the previous 365 days in that area into an input data matrix and an output data matrix, and normalizes them. An improved Elman neural network model formed by connecting M Elman neural networks in series is then built and trained. For prediction, the latest power load data of k consecutive days are collected, assembled into an input data vector and normalized; the resulting input vector is fed to the first-stage Elman neural network and propagated through the remaining stages and a final output network to yield the next day's load prediction. Because the training targets are deliberately amplified, the method has the advantage that the predicted power load can fully cover the actual power load demand.

Description

Power load prediction method based on improved Elman neural network
Technical Field
The invention relates to a power load prediction method, in particular to a power load prediction method based on an improved Elman neural network.
Background
An electric power system comprises the power network and its users; its task is to deliver uninterrupted, reliable and stable electric energy to those users and thereby satisfy their various power demands. Electricity production and use are special in that electric energy is difficult to store in large quantities, while user demand changes from moment to moment; the power produced by the system must therefore track the system load dynamically while remaining sufficient. This imposes two requirements: first, the capacity of the power equipment should be exploited as fully as possible so that production meets user demand; second, on the premise of stable and sufficient supply, waste in power production should be minimized. Power system load prediction technology has developed against this background and is the premise and basis for meeting both requirements.
The core problem of power load prediction is the technical realization of the prediction method, i.e. how to build a suitable mathematical prediction model from the characteristics of power load variation. Briefly, a prediction model establishes a relationship between inputs and outputs. Implementing power load prediction, however, has its own technical difficulties. For example, because of the increasingly complex structure of the power system and the nonlinear, time-varying and uncertain nature of load variation, it is difficult to build a load prediction model directly by first-principles mathematical modelling. In addition, load variation shows different patterns on holidays and on working days, which further raises the technical difficulty of load prediction.
Fortunately, neural networks, and deep neural network technology in particular, offer a new way to approach prediction problems. In recent years deep neural networks have been applied across many industries; their main idea is to extract, layer by layer, the nonlinear features of the input data that are relevant to the output. Common deep models include convolutional neural networks (CNN), deep neural networks (DNN) and stacked autoencoders (SAE). None of these models, however, accounts for temporal characteristics. As noted above, power load data vary markedly over time, so considering their time-series characteristics is essential.
In the existing patent and research literature, the Elman neural network, thanks to its feedback loop, can adapt to time-varying characteristics and directly and dynamically reflect the temporal behaviour of a process system. An Elman neural network can therefore be used to predict the power load. However, the traditional Elman neural network cannot perform deep feature analysis and extraction on power load data, so its prediction accuracy remains questionable. Moreover, load prediction must attend to regional and calendar characteristics: different areas have their own consumption profiles (a residential area and an industrial area behave completely differently), and demand differs between working days and holidays. Power load prediction based on the Elman neural network therefore needs further improvement to cope with the complex variation of load data and with different application areas.
Disclosure of Invention
The technical problem to be solved by the invention is how to use a multi-stage Elman neural network to extract, stage by stage, the temporal features relevant to the future power load, and to add a further layer of output neurons on top, thereby achieving accurate prediction of power load data.
The technical scheme adopted by the invention to solve this problem is a power load prediction method based on an improved Elman neural network, comprising the following steps:
Step 1: determine the number D of power load measurements recorded per day in the power supply area, and use the N = 365 × D power load data z_1, z_2, …, z_N of the previous 365 days in that area to build an input data matrix X ∈ R^{n×J} and an output data matrix Y ∈ R^{n×D} as follows:

X =
[ z_1      z_2        …  z_J
  z_2      z_3        …  z_{J+1}
  ⋮        ⋮              ⋮
  z_n      z_{n+1}    …  z_{n+J−1} ] ,
Y =
[ z_{J+1}  z_{J+2}    …  z_{J+D}
  z_{J+2}  z_{J+3}    …  z_{J+D+1}
  ⋮        ⋮              ⋮
  z_{n+J}  z_{n+J+1}  …  z_N ]

so that each row of X holds the loads of k consecutive days and the corresponding row of Y holds the loads of the following day; here R^{n×J} and R^{n×D} denote real matrices of dimensions n × J and n × D, k is the number of accumulated input days and must satisfy k ≥ 3, J = kD, and n = N − (k+1)D + 1;
Step 2: update the output data matrix Y according to the formula Y ← δY, where δ > 1 is an amplification factor; then normalize the J column vectors x_1, x_2, …, x_J of the input data matrix X and the D column vectors y_1, y_2, …, y_D of the output data matrix Y according to

x̃_j = (x_j − x_j(min)) / (x_j(max) − x_j(min)),  j ∈ {1, 2, …, J}
ỹ_d = (y_d − y_d(min)) / (y_d(max) − y_d(min)),  d ∈ {1, 2, …, D}

where x̃_j denotes the j-th column vector after normalization, x_j(min) and x_j(max) are the minimum and maximum elements of x_j, ỹ_d denotes the d-th column vector after normalization, and y_d(min) and y_d(max) are the minimum and maximum elements of y_d;

Step 3: combine x̃_1, x̃_2, …, x̃_J into the input matrix Ũ = [x̃_1, x̃_2, …, x̃_J]^T and ỹ_1, ỹ_2, …, ỹ_D into the output matrix Ṽ = [ỹ_1, ỹ_2, …, ỹ_D]^T; then let u_1, u_2, …, u_n denote the n column vectors of Ũ and v_1, v_2, …, v_n the n column vectors of Ṽ;
Step 4: build an improved Elman neural network model formed by connecting M Elman neural networks in series; set the transfer function of the middle-layer neurons to f(x) = 1/(1 + e^{−x}) and fix the numbers h_1, h_2, …, h_M of middle-layer neurons of the successive stages; the activation function ζ(x) of the output-layer neurons is a linear function, and x denotes the function argument;
Step 5: train the 1st-stage, 2nd-stage, …, up to the M-th-stage Elman neural network in turn with the BP algorithm, and retain the middle-layer weight coefficients W_1, W_2, …, W_M and thresholds b_1, b_2, …, b_M, the connection weights V_1, V_2, …, V_M and thresholds a_1, a_2, …, a_M from the receiving layer (the Elman context layer) to the middle layer, and the output-layer weight coefficients W_1^out, W_2^out, …, W_M^out and thresholds b_1^out, b_2^out, …, b_M^out of the improved Elman neural network model;
Step 6, according to the formula
Figure BDA0002746413510000033
Calculating an output vector y of an m-th-stage Elman neural network output layer1(m),y2(m),…,yN(m) and constructing an output estimation matrix
Figure BDA0002746413510000034
After that, the air conditioner is started to work,
7, repeating the step 6 until obtaining output estimation matrixes of all levels of Elman neural networks
Figure BDA0002746413510000035
Wherein M belongs to {1,2, …, M }, and i belongs to {1,2, …, n };
Step 8: build a three-layer neural network model whose input layer has MD = M × D neurons, whose hidden layer has H neurons and whose output layer has D neurons; the activation function of the hidden-layer neurons is φ(x), where x denotes the function argument;
Step 9: take the n column vectors of the stacked matrix [Ŷ(1); Ŷ(2); …; Ŷ(M)] ∈ R^{MD×n} as inputs and v_1, v_2, …, v_n as outputs, and train once more with the BP algorithm to obtain the hidden-layer weight coefficients W_0 ∈ R^{MD×H} and threshold b_0 ∈ R^{H×1}, together with the output-layer weight coefficients W_0^out ∈ R^{H×D} and threshold b_0^out ∈ R^{D×1};
Step 10, collecting the latest continuous k days of power load data, and sequentially recording the latest continuous k days of power load data as
Figure BDA0002746413510000039
And constructs it into an input data vector
Figure BDA00027464135100000310
Then, z is normalized according to the following formula to obtain an input vector
Figure BDA00027464135100000311
Figure BDA00027464135100000312
In the above equation, z (j) represents the jth element in the input data vector z,
Figure BDA00027464135100000313
representing input vectors
Figure BDA00027464135100000314
J ∈ {1,2, …, J };
Step 11: take the input vector z̃ obtained in step 10 as the input of the 1st-stage Elman neural network, and compute the middle-layer output vector c_t(1) and the output-layer output vector ŷ_t(1) of the 1st stage according to

c_t(1) = f(W_1^T z̃ + b_1 + V_1^T c_{t−1}(1) + a_1)
ŷ_t(1) = ζ((W_1^out)^T c_t(1) + b_1^out)

then initialize m = 2; in these formulas t denotes the index of the current prediction, c_{t−1}(1) denotes the middle-layer output vector of the 1st stage at the (t−1)-th prediction, and at the first prediction, i.e. t = 1, c_0(1) is the zero vector;
Step 12: combine the c_t(m−1) obtained in the previous step with z̃ into the column vector z_t(m) = [c_t(m−1); z̃], take z_t(m) as the input of the m-th-stage Elman neural network, and compute the middle-layer output vector c_t(m) and the output-layer output vector ŷ_t(m) of the m-th stage according to

c_t(m) = f(W_m^T z_t(m) + b_m + V_m^T c_{t−1}(m) + a_m)
ŷ_t(m) = ζ((W_m^out)^T c_t(m) + b_m^out)

where c_{t−1}(m) denotes the middle-layer output vector of the m-th stage at the (t−1)-th prediction, and at the first prediction, i.e. t = 1, c_0(m) is the zero vector;
Step 13: judge whether the condition m < M holds; if so, set m = m + 1 and return to step 12; if not, the M output vectors ŷ_t(1), ŷ_t(2), …, ŷ_t(M) have been obtained;
Step 14, obtaining the product of step 13
Figure BDA0002746413510000047
Are combined into a column vector
Figure BDA0002746413510000048
Then according to the formula
Figure BDA0002746413510000049
Computing the output vector C of hidden neurons0
Step 15: according to the formula
Figure BDA00027464135100000410
Computing output vectors for output layer neurons
Figure BDA00027464135100000411
Then, the following formula is used for
Figure BDA00027464135100000412
Respectively carrying out anti-normalization processing on each element in the power load prediction value to obtain the predicted value y ∈ R of the power load in the next dayD×1
Figure BDA00027464135100000413
In the above formula, y (d) represents the d-th element in y,
Figure BDA00027464135100000414
to represent
Figure BDA00027464135100000415
The D element in (D), D ∈ {1,2, …, D }; and step 16, repeating the steps 10 to 15 to predict the next power load.
The training procedure of step 5 is as follows:
Step (5.1): the input layer of the 1st-stage Elman neural network has J neurons, its middle layer has h_1 neurons and its output layer has D neurons; initialize the output-layer weight coefficients and thresholds, the connection weights and thresholds from the receiving layer to the middle layer, and the middle-layer weight coefficients and thresholds to arbitrary real numbers;
Step (5.2): with u_1, u_2, …, u_n as inputs and v_1, v_2, …, v_n as outputs of the 1st-stage Elman neural network, train with the BP algorithm to obtain the middle-layer weight coefficients W_1 ∈ R^{J×h_1} and threshold b_1 ∈ R^{h_1×1}, the connection weights V_1 ∈ R^{h_1×h_1} and threshold a_1 ∈ R^{h_1×1} from the receiving layer to the middle layer, and the output-layer weight coefficients W_1^out ∈ R^{h_1×D} and threshold b_1^out ∈ R^{D×1} of the 1st stage; then initialize m = 1;
Step (5.3): compute the output vectors g_1(m), g_2(m), …, g_n(m) of the m-th-stage middle-layer neurons according to the formula

g_i(m) = f(W_m^T ũ_i(m) + b_m + V_m^T g_{i−1}(m) + a_m)

where ũ_i(m) denotes the i-th input column of the m-th stage, i.e. ũ_i(1) = u_i and, for m > 1, the i-th column of the matrix Z constructed in the preceding pass of step (5.4); here i ∈ {1, 2, …, n}, and g_0(m) is set to the zero vector when i = 1;
Step (5.4): construct the matrix Z according to

Z = [ g_1(m)  g_2(m)  …  g_n(m)
      u_1     u_2     …  u_n    ]

i.e. each column of Z stacks the middle-layer output g_i(m) on top of the original input u_i, where Z ∈ R^{(h_m+J)×n} and R^{(h_m+J)×n} denotes a real matrix of dimension (h_m + J) × n;
Step (5.5): the input layer of the (m+1)-th-stage Elman neural network has h_m + J neurons, its middle layer has h_{m+1} neurons and its output layer has D neurons; initialize the middle-layer weight coefficients and thresholds and the output-layer weight coefficients and thresholds to arbitrary real numbers, and then initialize the connection weights and thresholds from the receiving layer to the middle layer to arbitrary real numbers;
Step (5.6): with the n column vectors of the matrix Z as inputs and v_1, v_2, …, v_n as outputs of the (m+1)-th-stage Elman neural network, train with the BP algorithm to obtain the middle-layer weight coefficients W_{m+1} ∈ R^{(h_m+J)×h_{m+1}} and threshold b_{m+1} ∈ R^{h_{m+1}×1}, the connection weights V_{m+1} and threshold a_{m+1} from the receiving layer to the middle layer, and the output-layer weight coefficients W_{m+1}^out and threshold b_{m+1}^out of the (m+1)-th stage, where R^{(h_m+J)×h_{m+1}} denotes a real matrix of dimension (h_m + J) × h_{m+1} and R^{h_{m+1}×1} a real vector of dimension h_{m+1} × 1;
Step (5.7): judge whether the condition m + 1 < M holds; if so, set m = m + 1 and return to step (5.3); if not, the training of the improved Elman neural network model is complete.
Compared with the prior art, the method has the following advantages. First, when the prediction model is established, the power load data to be predicted are deliberately amplified, so that the predicted power load can fully cover the actual power load demand. Second, the method connects several Elman neural networks in series, and every stage reuses the original input data, which avoids the loss of information during the stage-by-stage extraction of features. Finally, in the specific embodiment below, the reliability and feasibility of the method are verified by comparing the power load prediction results with the actual power load demand.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of power load data over time;
FIG. 3 is a schematic diagram of the structure of the improved Elman neural network of the method of the present invention;
FIG. 4 is a graph of the power load prediction results for a future day.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and an embodiment.
A power load prediction method based on an improved Elman neural network comprises the following steps:
Step 1: determine the number D of power load measurements recorded per day in the power supply area, and use the N = 365 × D power load data z_1, z_2, …, z_N of the previous 365 days in that area to build an input data matrix X ∈ R^{n×J} and an output data matrix Y ∈ R^{n×D} as follows:

X =
[ z_1      z_2        …  z_J
  z_2      z_3        …  z_{J+1}
  ⋮        ⋮              ⋮
  z_n      z_{n+1}    …  z_{n+J−1} ] ,
Y =
[ z_{J+1}  z_{J+2}    …  z_{J+D}
  z_{J+2}  z_{J+3}    …  z_{J+D+1}
  ⋮        ⋮              ⋮
  z_{n+J}  z_{n+J+1}  …  z_N ]

so that each row of X holds the loads of k consecutive days and the corresponding row of Y holds the loads of the following day; here R^{n×J} and R^{n×D} denote real matrices of dimensions n × J and n × D, n = N − (k+1)D + 1, k is the number of accumulated input days and must satisfy k ≥ 3, kD = k × D is the product of k and D, (k+1)D = (k+1) × D is the product of k+1 and D, and J = kD;
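As a concrete check of these dimensions (the hourly sampling interval here is only an illustrative assumption): with one measurement per hour, D = 24 and N = 365 × 24 = 8760; choosing k = 3 gives J = kD = 72 inputs per sample and n = 8760 − 4 × 24 + 1 = 8665 training samples.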
Step 2: update the output data matrix Y according to the formula Y ← δY, where δ > 1 is an amplification factor; then normalize the J column vectors x_1, x_2, …, x_J of the input data matrix X and the D column vectors y_1, y_2, …, y_D of the output data matrix Y according to

x̃_j = (x_j − x_j(min)) / (x_j(max) − x_j(min)),  j ∈ {1, 2, …, J}
ỹ_d = (y_d − y_d(min)) / (y_d(max) − y_d(min)),  d ∈ {1, 2, …, D}

where x̃_j denotes the j-th column vector after normalization, x_j(min) and x_j(max) are the minimum and maximum elements of x_j, ỹ_d denotes the d-th column vector after normalization, and y_d(min) and y_d(max) are the minimum and maximum elements of y_d;
Step 3: combine x̃_1, x̃_2, …, x̃_J into the input matrix Ũ = [x̃_1, x̃_2, …, x̃_J]^T and ỹ_1, ỹ_2, …, ỹ_D into the output matrix Ṽ = [ỹ_1, ỹ_2, …, ỹ_D]^T, where the superscript T denotes the transpose of a matrix or vector; then let u_1, u_2, …, u_n denote the n column vectors of Ũ and v_1, v_2, …, v_n the n column vectors of Ṽ;
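For concreteness, steps 1 to 3 can be sketched in a few lines of Python with numpy; the function name, the default amplification factor and the use of numpy are illustrative assumptions rather than part of the claimed method:

    import numpy as np

    def build_matrices(z, D, k, delta=1.1):
        # Sketch of steps 1-3: sliding-window construction of X (n x J) and
        # Y (n x D) from one year of load data z (length N = 365 * D),
        # amplification of the targets by delta > 1, and column-wise
        # min-max normalization.
        N = z.shape[0]
        J = k * D
        n = N - (k + 1) * D + 1
        X = np.stack([z[i:i + J] for i in range(n)])                   # k days in
        Y = delta * np.stack([z[i + J:i + J + D] for i in range(n)])   # next day out
        x_min, x_max = X.min(axis=0), X.max(axis=0)
        y_min, y_max = Y.min(axis=0), Y.max(axis=0)
        U = ((X - x_min) / (x_max - x_min)).T   # columns are u_1, ..., u_n
        V = ((Y - y_min) / (y_max - y_min)).T   # columns are v_1, ..., v_n
        return U, V, (x_min, x_max, y_min, y_max)

The stored minima and maxima are reused for the normalization of step 10 and the de-normalization of step 15.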
Step 4: build an improved Elman neural network model formed by connecting M Elman neural networks in series; set the transfer function of the middle-layer neurons to f(x) = 1/(1 + e^{−x}) and fix the numbers h_1, h_2, …, h_M of middle-layer neurons of the successive stages; the activation function ζ(x) of the output-layer neurons is a linear function, and x denotes the function argument;
Step 5: train the 1st-stage, 2nd-stage, …, up to the M-th-stage Elman neural network in turn with the BP algorithm, and retain the middle-layer weight coefficients W_1, W_2, …, W_M and thresholds b_1, b_2, …, b_M, the connection weights V_1, V_2, …, V_M and thresholds a_1, a_2, …, a_M from the receiving layer (the Elman context layer) to the middle layer, and the output-layer weight coefficients W_1^out, W_2^out, …, W_M^out and thresholds b_1^out, b_2^out, …, b_M^out of the improved Elman neural network model;
The specific procedure of step 5 for training the Elman neural networks with the BP algorithm is as follows. Step (5.1): the input layer of the 1st-stage Elman neural network has J neurons, its middle layer has h_1 neurons and its output layer has D neurons; initialize the output-layer weight coefficients and thresholds, the connection weights and thresholds from the receiving layer to the middle layer, and the middle-layer weight coefficients and thresholds to arbitrary real numbers;
Step (5.2): with u_1, u_2, …, u_n as inputs and v_1, v_2, …, v_n as outputs of the 1st-stage Elman neural network, train with the BP algorithm to obtain the middle-layer weight coefficients W_1 ∈ R^{J×h_1} and threshold b_1 ∈ R^{h_1×1}, the connection weights V_1 ∈ R^{h_1×h_1} and threshold a_1 ∈ R^{h_1×1} from the receiving layer to the middle layer, and the output-layer weight coefficients W_1^out ∈ R^{h_1×D} and threshold b_1^out ∈ R^{D×1} of the 1st stage; then initialize m = 1;
Step (5.3): compute the output vectors g_1(m), g_2(m), …, g_n(m) of the m-th-stage middle-layer neurons according to the formula

g_i(m) = f(W_m^T ũ_i(m) + b_m + V_m^T g_{i−1}(m) + a_m)

where ũ_i(m) denotes the i-th input column of the m-th stage, i.e. ũ_i(1) = u_i and, for m > 1, the i-th column of the matrix Z constructed in the preceding pass of step (5.4); here i ∈ {1, 2, …, n}, and g_0(m) is set to the zero vector when i = 1;
Step (5.4): construct the matrix Z according to

Z = [ g_1(m)  g_2(m)  …  g_n(m)
      u_1     u_2     …  u_n    ]

i.e. each column of Z stacks the middle-layer output g_i(m) on top of the original input u_i, where Z ∈ R^{(h_m+J)×n} and R^{(h_m+J)×n} denotes a real matrix of dimension (h_m + J) × n;
Step (5.5): the input layer of the (m+1)-th-stage Elman neural network has h_m + J neurons, its middle layer has h_{m+1} neurons and its output layer has D neurons; initialize the middle-layer weight coefficients and thresholds and the output-layer weight coefficients and thresholds to arbitrary real numbers, and then initialize the connection weights and thresholds from the receiving layer to the middle layer to arbitrary real numbers;
Step (5.6): with the n column vectors of the matrix Z as inputs and v_1, v_2, …, v_n as outputs of the (m+1)-th-stage Elman neural network, train with the BP algorithm to obtain the middle-layer weight coefficients W_{m+1} ∈ R^{(h_m+J)×h_{m+1}} and threshold b_{m+1} ∈ R^{h_{m+1}×1}, the connection weights V_{m+1} and threshold a_{m+1} from the receiving layer to the middle layer, and the output-layer weight coefficients W_{m+1}^out and threshold b_{m+1}^out of the (m+1)-th stage, where R^{(h_m+J)×h_{m+1}} denotes a real matrix of dimension (h_m + J) × h_{m+1} and R^{h_{m+1}×1} a real vector of dimension h_{m+1} × 1;
Step (5.7): judge whether the condition m + 1 < M holds; if so, set m = m + 1 and return to step (5.3); if not, the training of the improved Elman neural network model is complete;
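As an aid to understanding, the forward recursion of step (5.3) together with the output computation of step 6 can be sketched for a single trained stage as follows; the BP training itself is standard and omitted, and all identifiers are ours:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def elman_stage_forward(U, W, b, V, a, W_out, b_out):
        # Propagate the n input columns of U through the middle layer with
        # receiving-layer (context) feedback, then through the linear output
        # layer; G collects g_1(m), ..., g_n(m) and Y_hat the y_i(m).
        h, n = W.shape[1], U.shape[1]
        G = np.zeros((h, n))
        g_prev = np.zeros(h)                      # g_0(m) is the zero vector
        for i in range(n):
            g_prev = sigmoid(W.T @ U[:, i] + b + V.T @ g_prev + a)
            G[:, i] = g_prev
        Y_hat = W_out.T @ G + b_out[:, None]      # linear activation zeta
        return G, Y_hat

The matrix Z of step (5.4) is then simply np.vstack([G, U]), which becomes the input of the next stage.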
Step 6: compute the output vectors y_1(m), y_2(m), …, y_n(m) of the m-th-stage output layer according to the formula

y_i(m) = ζ((W_m^out)^T g_i(m) + b_m^out)

where g_i(m) is the middle-layer output vector obtained during the training of step 5, and assemble them into the output estimation matrix Ŷ(m) = [y_1(m), y_2(m), …, y_n(m)];

Step 7: repeat step 6 for every stage until the output estimation matrices Ŷ(1), Ŷ(2), …, Ŷ(M) of all M stages have been obtained, where m ∈ {1, 2, …, M} and i ∈ {1, 2, …, n};
Step 8: build a three-layer neural network model whose input layer has MD = M × D neurons, whose hidden layer has H neurons and whose output layer has D neurons; the activation function of the hidden-layer neurons is φ(x) and that of the output-layer neurons is ζ(x), where x denotes the function argument;
Step 9: take the n column vectors of the stacked matrix [Ŷ(1); Ŷ(2); …; Ŷ(M)] ∈ R^{MD×n} as inputs and v_1, v_2, …, v_n as outputs, and train once more with the BP algorithm to obtain the hidden-layer weight coefficients W_0 ∈ R^{MD×H} and threshold b_0 ∈ R^{H×1}, together with the output-layer weight coefficients W_0^out ∈ R^{H×D} and threshold b_0^out ∈ R^{D×1};
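Viewed at inference time, steps 7 to 9 amount to stacking the M output estimation matrices and passing them through the three-layer fusion network. A minimal sketch, assuming (since the patent leaves φ(x) unspecified) that the hidden activation is the logistic sigmoid:

    import numpy as np

    def fusion_forward(Y_hats, W0, b0, W0_out, b0_out):
        # Y_hats is the list of M output estimation matrices, each D x n.
        Psi = np.vstack(Y_hats)                                  # MD x n
        C0 = 1.0 / (1.0 + np.exp(-(W0.T @ Psi + b0[:, None])))   # hidden layer
        return W0_out.T @ C0 + b0_out[:, None]                   # linear output, D x n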
Step 10, collecting the latest continuous k days of power load data, and sequentially recording the latest continuous k days of power load data as
Figure BDA0002746413510000087
And constructs it into an input data vector
Figure BDA0002746413510000088
Then, z is normalized according to the following formula to obtain an input vector
Figure BDA0002746413510000089
Figure BDA00027464135100000810
In the above equation, z (j) represents the jth element in the input data vector z,
Figure BDA00027464135100000811
representing input vectors
Figure BDA00027464135100000812
J ∈ {1,2, …, J };
Step 11: take the input vector z̃ as the input of the 1st-stage Elman neural network, and compute the middle-layer output vector c_t(1) and the output-layer output vector ŷ_t(1) of the 1st stage according to

c_t(1) = f(W_1^T z̃ + b_1 + V_1^T c_{t−1}(1) + a_1)
ŷ_t(1) = ζ((W_1^out)^T c_t(1) + b_1^out)

then initialize m = 2; in these formulas t denotes the index of the current prediction, c_{t−1}(1) denotes the middle-layer output vector of the 1st stage at the (t−1)-th prediction, and at the first prediction, i.e. t = 1, c_0(1) is the zero vector;
Step 12: combine c_t(m−1) with z̃ into the column vector z_t(m) = [c_t(m−1); z̃], take z_t(m) as the input of the m-th-stage Elman neural network, and compute the middle-layer output vector c_t(m) and the output-layer output vector ŷ_t(m) of the m-th stage according to

c_t(m) = f(W_m^T z_t(m) + b_m + V_m^T c_{t−1}(m) + a_m)
ŷ_t(m) = ζ((W_m^out)^T c_t(m) + b_m^out)

where c_{t−1}(m) denotes the middle-layer output vector of the m-th stage at the (t−1)-th prediction, and at the first prediction, i.e. t = 1, c_0(m) is the zero vector;
Step 13: judge whether the condition m < M holds; if so, set m = m + 1 and return to step 12; if not, the M output vectors ŷ_t(1), ŷ_t(2), …, ŷ_t(M) have been obtained;
Step 14, mixing
Figure BDA0002746413510000095
Are combined into a column vector
Figure BDA0002746413510000096
Then according to the formula
Figure BDA0002746413510000097
Computing the output vector C of hidden neurons0
Step 15: according to the formula
Figure BDA0002746413510000098
Computing output vectors for output layer neurons
Figure BDA0002746413510000099
Then, the following formula is used for
Figure BDA00027464135100000910
Respectively carrying out anti-normalization processing on each element in the power load prediction value to obtain the predicted value y ∈ R of the power load in the next dayD×1
Figure BDA00027464135100000911
In the above formula, y (d) represents the d-th element in y,
Figure BDA00027464135100000912
to represent
Figure BDA00027464135100000913
The D element of (D), D ∈ {1,2, …, D };
Step 16: repeat steps 10 to 15 for the next power load prediction.
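Steps 10 to 16 form the online prediction loop. The sketch below (identifiers ours, with φ again assumed to be the logistic sigmoid) carries one prediction through all M stages and the fusion network, keeping each stage's receiving-layer vector between successive predictions:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def predict_next_day(z_new, stages, fusion, scalers, c_prev):
        # stages: list of (W, b, V, a, W_out, b_out) per Elman stage;
        # fusion: (W0, b0, W0_out, b0_out); scalers: min/max from training;
        # c_prev: list of context vectors, zero vectors at the first call.
        x_min, x_max, y_min, y_max = scalers
        z_tilde = (z_new - x_min) / (x_max - x_min)             # step 10
        outputs, z_in = [], z_tilde
        for m, (W, b, V, a, W_out, b_out) in enumerate(stages):
            c = sigmoid(W.T @ z_in + b + V.T @ c_prev[m] + a)   # steps 11-12
            outputs.append(W_out.T @ c + b_out)
            c_prev[m] = c
            z_in = np.concatenate([c, z_tilde])   # z_t(m+1) = [c_t(m); z~]
        z0 = np.concatenate(outputs)              # step 14, length MD
        W0, b0, W0_out, b0_out = fusion
        C0 = sigmoid(W0.T @ z0 + b0)
        y_hat = W0_out.T @ C0 + b0_out            # step 15
        return y_hat * (y_max - y_min) + y_min    # de-normalization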
The embodiment described above is only a preferred embodiment of the invention; although its description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the invention.

Claims (2)

1. A power load prediction method based on an improved Elman neural network is characterized by comprising the following steps:
Step 1: determine the number D of power load measurements recorded per day in the power supply area, and use the N = 365 × D power load data z_1, z_2, …, z_N of the previous 365 days in that area to build an input data matrix X ∈ R^{n×J} and an output data matrix Y ∈ R^{n×D} as follows:

X =
[ z_1      z_2        …  z_J
  z_2      z_3        …  z_{J+1}
  ⋮        ⋮              ⋮
  z_n      z_{n+1}    …  z_{n+J−1} ] ,
Y =
[ z_{J+1}  z_{J+2}    …  z_{J+D}
  z_{J+2}  z_{J+3}    …  z_{J+D+1}
  ⋮        ⋮              ⋮
  z_{n+J}  z_{n+J+1}  …  z_N ]

so that each row of X holds the loads of k consecutive days and the corresponding row of Y holds the loads of the following day; here R^{n×J} and R^{n×D} denote real matrices of dimensions n × J and n × D, k is the number of accumulated input days and must satisfy k ≥ 3, J = kD, and n = N − (k+1)D + 1;
Step 2: update the output data matrix Y according to the formula Y ← δY, where δ > 1 is an amplification factor; then normalize the J column vectors x_1, x_2, …, x_J of the input data matrix X and the D column vectors y_1, y_2, …, y_D of the output data matrix Y according to

x̃_j = (x_j − x_j(min)) / (x_j(max) − x_j(min)),  j ∈ {1, 2, …, J}
ỹ_d = (y_d − y_d(min)) / (y_d(max) − y_d(min)),  d ∈ {1, 2, …, D}

where x̃_j denotes the j-th column vector after normalization, x_j(min) and x_j(max) are the minimum and maximum elements of x_j, ỹ_d denotes the d-th column vector after normalization, and y_d(min) and y_d(max) are the minimum and maximum elements of y_d;
Step 3: combine x̃_1, x̃_2, …, x̃_J into the input matrix Ũ = [x̃_1, x̃_2, …, x̃_J]^T and ỹ_1, ỹ_2, …, ỹ_D into the output matrix Ṽ = [ỹ_1, ỹ_2, …, ỹ_D]^T, where the superscript T denotes the transpose of a matrix or vector; then let u_1, u_2, …, u_n denote the n column vectors of Ũ and v_1, v_2, …, v_n the n column vectors of Ṽ;
Step 4: build an improved Elman neural network model formed by connecting M Elman neural networks in series; set the transfer function of the middle-layer neurons to f(x) = 1/(1 + e^{−x}) and fix the numbers h_1, h_2, …, h_M of middle-layer neurons of the successive stages; the activation function ζ(x) of the output-layer neurons is a linear function, and x denotes the function argument;
Step 5: train the 1st-stage, 2nd-stage, …, up to the M-th-stage Elman neural network in turn with the BP algorithm, and retain the middle-layer weight coefficients W_1, W_2, …, W_M and thresholds b_1, b_2, …, b_M, the connection weights V_1, V_2, …, V_M and thresholds a_1, a_2, …, a_M from the receiving layer (the Elman context layer) to the middle layer, and the output-layer weight coefficients W_1^out, W_2^out, …, W_M^out and thresholds b_1^out, b_2^out, …, b_M^out of the improved Elman neural network model;
Step 6, according to the formula
Figure FDA00027464135000000113
Computing m-th Elman spiritOutput vector y via the network output layer1(m),y2(m),…,yN(m) and constructing an output estimation matrix
Figure FDA00027464135000000114
After that, the air conditioner is started to work,
7, repeating the step 6 until obtaining output estimation matrixes of all levels of Elman neural networks
Figure FDA00027464135000000115
Wherein M belongs to {1,2, …, M }, and i belongs to {1,2, …, n };
Step 8: build a three-layer neural network model whose input layer has MD = M × D neurons, whose hidden layer has H neurons and whose output layer has D neurons; the activation function of the hidden-layer neurons is φ(x), where x denotes the function argument;
Step 9: take the n column vectors of the stacked matrix [Ŷ(1); Ŷ(2); …; Ŷ(M)] ∈ R^{MD×n} as inputs and v_1, v_2, …, v_n as outputs, and train once more with the BP algorithm to obtain the hidden-layer weight coefficients W_0 ∈ R^{MD×H} and threshold b_0 ∈ R^{H×1}, together with the output-layer weight coefficients W_0^out ∈ R^{H×D} and threshold b_0^out ∈ R^{D×1};
Step 10, collecting the latest continuous k days of power load data, and sequentially recording the latest continuous k days of power load data as
Figure FDA0002746413500000024
And constructs it into an input data vector
Figure FDA0002746413500000025
Then, z is normalized according to the following formula to obtain an input vector
Figure FDA0002746413500000026
Figure FDA0002746413500000027
In the above equation, z (j) represents the jth element in the input data vector z,
Figure FDA0002746413500000028
representing input vectors
Figure FDA0002746413500000029
J ∈ {1,2, …, J };
Step 11: take the input vector z̃ obtained in step 10 as the input of the 1st-stage Elman neural network, and compute the middle-layer output vector c_t(1) and the output-layer output vector ŷ_t(1) of the 1st stage according to

c_t(1) = f(W_1^T z̃ + b_1 + V_1^T c_{t−1}(1) + a_1)
ŷ_t(1) = ζ((W_1^out)^T c_t(1) + b_1^out)

then initialize m = 2; in these formulas t denotes the index of the current prediction, c_{t−1}(1) denotes the middle-layer output vector of the 1st stage at the (t−1)-th prediction, and at the first prediction, i.e. t = 1, c_0(1) is the zero vector;
Step 12: combine the c_t(m−1) obtained in the previous step with z̃ into the column vector z_t(m) = [c_t(m−1); z̃], take z_t(m) as the input of the m-th-stage Elman neural network, and compute the middle-layer output vector c_t(m) and the output-layer output vector ŷ_t(m) of the m-th stage according to

c_t(m) = f(W_m^T z_t(m) + b_m + V_m^T c_{t−1}(m) + a_m)
ŷ_t(m) = ζ((W_m^out)^T c_t(m) + b_m^out)

where c_{t−1}(m) denotes the middle-layer output vector of the m-th stage at the (t−1)-th prediction, and at the first prediction, i.e. t = 1, c_0(m) is the zero vector;
Step 13: judge whether the condition m < M holds; if so, set m = m + 1 and return to step 12; if not, the M output vectors ŷ_t(1), ŷ_t(2), …, ŷ_t(M) have been obtained;
Step 14, obtaining the product of step 13
Figure FDA0002746413500000032
Are combined into a column vector
Figure FDA0002746413500000033
Then according to the formula
Figure FDA0002746413500000034
Computing the output vector C of hidden neurons0
Step 15: according to the formula
Figure FDA0002746413500000035
Computing output vectors for output layer neurons
Figure FDA0002746413500000036
Then, the following formula is used for
Figure FDA0002746413500000037
Respectively carrying out anti-normalization processing on each element in the power load prediction value to obtain the predicted value y ∈ R of the power load in the next dayD×1
Figure FDA0002746413500000038
In the above formula, y (d) represents the d-th element in y,
Figure FDA0002746413500000039
to represent
Figure FDA00027464135000000310
The D element in (D), D ∈ {1,2, …, D };
Step 16: repeat steps 10 to 15 for the next power load prediction.
2. The power load prediction method based on an improved Elman neural network according to claim 1, characterized in that the training procedure of step 5 is as follows:
Step (5.1): the input layer of the 1st-stage Elman neural network has J neurons, its middle layer has h_1 neurons and its output layer has D neurons; initialize the output-layer weight coefficients and thresholds, the connection weights and thresholds from the receiving layer to the middle layer, and the middle-layer weight coefficients and thresholds to arbitrary real numbers;
Step (5.2): with u_1, u_2, …, u_n as inputs and v_1, v_2, …, v_n as outputs of the 1st-stage Elman neural network, train with the BP algorithm to obtain the middle-layer weight coefficients W_1 ∈ R^{J×h_1} and threshold b_1 ∈ R^{h_1×1}, the connection weights V_1 ∈ R^{h_1×h_1} and threshold a_1 ∈ R^{h_1×1} from the receiving layer to the middle layer, and the output-layer weight coefficients W_1^out ∈ R^{h_1×D} and threshold b_1^out ∈ R^{D×1} of the 1st stage; then initialize m = 1;
Step (5.3): compute the output vectors g_1(m), g_2(m), …, g_n(m) of the m-th-stage middle-layer neurons according to the formula

g_i(m) = f(W_m^T ũ_i(m) + b_m + V_m^T g_{i−1}(m) + a_m)

where ũ_i(m) denotes the i-th input column of the m-th stage, i.e. ũ_i(1) = u_i and, for m > 1, the i-th column of the matrix Z constructed in the preceding pass of step (5.4); here i ∈ {1, 2, …, n}, and g_0(m) is set to the zero vector when i = 1;
Step (5.4): construct the matrix Z according to

Z = [ g_1(m)  g_2(m)  …  g_n(m)
      u_1     u_2     …  u_n    ]

i.e. each column of Z stacks the middle-layer output g_i(m) on top of the original input u_i, where Z ∈ R^{(h_m+J)×n} and R^{(h_m+J)×n} denotes a real matrix of dimension (h_m + J) × n;
Step (5.5): the input layer of the (m+1)-th-stage Elman neural network has h_m + J neurons, its middle layer has h_{m+1} neurons and its output layer has D neurons; initialize the middle-layer weight coefficients and thresholds and the output-layer weight coefficients and thresholds to arbitrary real numbers, and then initialize the connection weights and thresholds from the receiving layer to the middle layer to arbitrary real numbers;
Step (5.6): with the n column vectors of the matrix Z as inputs and v_1, v_2, …, v_n as outputs of the (m+1)-th-stage Elman neural network, train with the BP algorithm to obtain the middle-layer weight coefficients W_{m+1} ∈ R^{(h_m+J)×h_{m+1}} and threshold b_{m+1} ∈ R^{h_{m+1}×1}, the connection weights V_{m+1} and threshold a_{m+1} from the receiving layer to the middle layer, and the output-layer weight coefficients W_{m+1}^out and threshold b_{m+1}^out of the (m+1)-th stage, where R^{(h_m+J)×h_{m+1}} denotes a real matrix of dimension (h_m + J) × h_{m+1} and R^{h_{m+1}×1} a real vector of dimension h_{m+1} × 1;
Step (5.7): judge whether the condition m + 1 < M holds; if so, set m = m + 1 and return to step (5.3); if not, the training of the improved Elman neural network model is complete.
CN202011168134.6A 2020-10-28 2020-10-28 Power load prediction method based on improved Elman neural network Active CN112200383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011168134.6A CN112200383B (en) 2020-10-28 2020-10-28 Power load prediction method based on improved Elman neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011168134.6A CN112200383B (en) 2020-10-28 2020-10-28 Power load prediction method based on improved Elman neural network

Publications (2)

Publication Number Publication Date
CN112200383A (en) 2021-01-08
CN112200383B CN112200383B (en) 2024-05-17

Family

ID=74011684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011168134.6A Active CN112200383B (en) 2020-10-28 2020-10-28 Power load prediction method based on improved Elman neural network

Country Status (1)

Country Link
CN (1) CN112200383B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631517A (en) * 2015-12-17 2016-06-01 河海大学 Photovoltaic power generation power short term prediction method based on mind evolution Elman neural network
CN106651020A (en) * 2016-12-16 2017-05-10 燕山大学 Short-term power load prediction method based on big data reduction
CN111028100A (en) * 2019-11-29 2020-04-17 南方电网能源发展研究院有限责任公司 Refined short-term load prediction method, device and medium considering meteorological factors
CN111428926A (en) * 2020-03-23 2020-07-17 国网江苏省电力有限公司镇江供电分公司 Regional power load prediction method considering meteorological factors
CN111912875A (en) * 2020-06-23 2020-11-10 宁波大学 Fractionating tower benzene content soft measurement method based on stack type Elman neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
李志恒 et al., "Design of a medium- and long-term power load forecasting algorithm based on an improved BP neural network", Automation & Instrumentation, no. 10, pp. 23-25 *
王泽; 曹莉莎, "Power plant data prediction based on the Elman neural network", Inner Mongolia Science Technology & Economy, no. 03, pp. 93-94 *
苏宜靖; 谷炜; 赵依; 董立; 蒋琛; 于竞哲, "Load forecasting for a regional power grid during the plum-rain season considering meteorological factors", Zhejiang Electric Power, no. 12 *
赵铭扬 et al., "Application of an improved Elman neural network in short-term power load forecasting", Ningxia Engineering Technology, no. 02, pp. 115-117 *
郭姣姣, "Short-term power load forecasting based on an improved Elman neural network", China Master's Theses Full-text Database, Information Science and Technology series (monthly), no. 2015, pp. 30-42 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887820A (en) * 2021-10-20 2022-01-04 国网浙江省电力有限公司 Method and device for predicting fault of electric power spot business system, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112200383B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
Ullah et al. Short-term prediction of residential power energy consumption via CNN and multi-layer bi-directional LSTM networks
CN112116153B (en) Park multivariate load joint prediction method coupling Copula and stacked LSTM network
CN109840154B (en) Task dependency-based computing migration method in mobile cloud environment
Jian et al. A cloud edge-based two-level hybrid scheduling learning model in cloud manufacturing
Cai et al. An efficient approach for electric load forecasting using distributed ART (adaptive resonance theory) & HS-ARTMAP (Hyper-spherical ARTMAP network) neural network
CN112685657B (en) Conversation social recommendation method based on multi-mode cross fusion graph network
CN116663419A (en) Sensorless equipment fault prediction method based on optimized Elman neural network
CN111325340A (en) Information network relation prediction method and system
CN110796293A (en) Power load prediction method
CN112200383A (en) Power load prediction method based on improved Elman neural network
CN114745725B (en) Resource allocation management system based on edge computing industrial Internet of things
CN114219074A (en) Wireless communication network resource allocation algorithm dynamically adjusted according to requirements
CN114154688A (en) Short-term power prediction method for photovoltaic power station
CN117114776A (en) Price reporting method for provincial day-ahead spot transaction
Zhang et al. Improving the accuracy of load forecasting for campus buildings based on federated learning
CN112101651B (en) Electric energy network coordination control method, system and information data processing terminal
CN116702598A (en) Training method, device, equipment and storage medium for building achievement prediction model
Bashir et al. Short-term load forecasting using artificial neural network based on particle swarm optimization algorithm
Biju et al. Electric load demand forecasting with RNN cell generated by DARTS
CN114611823A (en) Optimized dispatching method and system for electricity-cold-heat-gas multi-energy-demand typical park
CN113962454A (en) LSTM energy consumption prediction method based on dual feature selection and particle swarm optimization
Ying Application of support vector regression algorithm in colleges recruiting students prediction
CN112100831B (en) Power grid capacity expansion planning system and method based on relaxation constraint
CN115834247B (en) Edge computing trust evaluation method based on blockchain
CN110852480B (en) Electric power data completion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant