CN112200383A - Power load prediction method based on improved Elman neural network - Google Patents
Power load prediction method based on improved Elman neural network
- Publication number: CN112200383A (application CN202011168134.6A)
- Authority
- CN
- China
- Prior art keywords
- layer
- output
- neural network
- elman neural
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06Q50/06—Energy or water supply
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a power load prediction method based on an improved Elman neural network. The method first determines the number D of power load measurements taken each day in a power supply area and combines the N = 365 × D power load data of the previous 365 days into an input data matrix and an output data matrix, which are then normalized. An improved Elman neural network model is built by connecting M stages of Elman neural networks in series. At prediction time, the latest power load data of k consecutive days are collected, assembled into an input data vector and normalized, and the resulting input vector is fed to the first-stage Elman neural network; the outputs of all stages are then fused by a three-layer neural network to yield the next day's prediction. Because the output data are actively amplified during training, the predicted power load can fully cover the actual power load demand.
Description
Technical Field
The invention relates to a power load prediction method, in particular to a power load prediction method based on an improved Elman neural network.
Background
The electric power system is composed of the electric power network and electric power users; its task is to provide uninterrupted, reliable and stable electric energy to users, so as to meet their various power consumption demands. Because the production and use of electric power are particular (electric energy is difficult to store in large quantities, while user demand for electric power changes from moment to moment), the power produced by the system must vary dynamically with the system load while the users' demand is fully guaranteed. This involves two requirements: first, the capacity of the power equipment should be exploited to the greatest extent so that the produced power meets user demand; second, on the premise of stable and sufficient power supply, the waste of power production should be reduced as much as possible. Power system load prediction technology has developed against this background and is the premise and basis of both tasks.
The core problem of power load prediction is the technical implementation problem of the prediction method, or how to build a corresponding prediction mathematical model according to the characteristics of power load change. Briefly, a predictive mathematical model is a model that establishes a relationship between input and output. However, the implementation of power load prediction has its own technical difficulties. For example, due to the increasingly complex structure of the power system and the non-linearity, time-varying characteristics and uncertainty characteristics of the power load change, it is difficult to directly apply a mathematical modeling method to establish a corresponding power load prediction model. In addition, the characteristics of the change of the power load can present different change characteristics according to holidays and working days, so that the technical difficulty is further improved for the prediction of the power load.
Fortunately, neural networks, especially deep neural network technologies, provide a new idea for solving prediction problems. In recent years, deep neural networks have been applied in many industries; their main idea is to mine, layer by layer and in a progressive manner, the nonlinear features of the input data that are related to the output. Common deep neural networks include convolutional neural networks (CNN), deep neural networks (DNN) and stacked autoencoders (SAE). However, none of these deep neural network models can take timing characteristics into account. As described above, power load data change markedly with time, so it is very necessary to consider their time-series characteristics.
In existing patents and scientific literature, the Elman neural network, thanks to its feedback loop, can adapt to time-varying characteristics and directly and dynamically reflect the time-series characteristics of a process system. The Elman neural network can therefore be used to predict the electrical load. However, the traditional Elman neural network cannot perform deep feature analysis and extraction on power load data, so the accuracy of its power load predictions is questionable. In addition, power load prediction must pay attention to area and date characteristics: different areas have their own electricity consumption patterns (those of residential and industrial areas are completely different), and the electricity demands of working days and holidays also differ. The Elman-based power load prediction technology therefore needs further improvement to cope with the complex change characteristics of power load data and with different application areas.
Disclosure of Invention
The technical problem to be solved by the invention is: how to use a multi-stage Elman neural network to progressively extract the time-series features relevant to the future power load, and to add an extra layer of output neurons to the network structure, so as to achieve accurate prediction of power load data.
The technical scheme adopted by the invention for solving the technical problems is as follows: a power load prediction method based on an improved Elman neural network is characterized by comprising the following steps:
Step 1: determine the number D of power load measurements taken each day in the power supply area, and use the N = 365 × D power load data z1, z2, …, zN of the previous 365 days in the power supply area to build the input data matrix X ∈ R^(n×J) and the output data matrix Y ∈ R^(n×D); the specific construction is as follows:
where R^(n×J) denotes a real matrix of dimension n × J and R^(n×D) a real matrix of dimension n × D;
Step 2: after updating the output data matrix according to Y := δ·Y, normalize the J column vectors x1, x2, …, xJ of the input data matrix X and the D column vectors y1, y2, …, yD of the output data matrix Y according to
x̃j = (xj − xj(min)) / (xj(max) − xj(min)),  ỹd = (yd − yd(min)) / (yd(max) − yd(min)),
where δ > 1 is an amplification factor, x̃j is the j-th column vector after normalization, xj(min) and xj(max) are the minimum and maximum values in column vector xj, j ∈ {1, 2, …, J}, yd(min) and yd(max) are the minimum and maximum values in column vector yd, ỹd is the d-th column vector after normalization, and d ∈ {1, 2, …, D};
Step 3: assemble x̃1, x̃2, …, x̃J into the input matrix Ũ = [x̃1, x̃2, …, x̃J]ᵀ and ỹ1, ỹ2, …, ỹD into the output matrix Ṽ = [ỹ1, ỹ2, …, ỹD]ᵀ; then denote by u1, u2, …, un the n column vectors of Ũ and by v1, v2, …, vn the n column vectors of Ṽ;
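As a concrete illustration, the column-wise min-max normalization of step 2, with the amplification factor δ applied to Y first, can be sketched as follows. The function name and toy data are illustrative, and the (a − min)/(max − min) form is one consistent reading of the step-2 formulas, whose images are not reproduced in the text:

```python
import numpy as np

def normalize_columns(A):
    """Column-wise min-max normalization to [0, 1]: (a - min) / (max - min)."""
    a_min = A.min(axis=0)
    a_max = A.max(axis=0)
    return (A - a_min) / (a_max - a_min), a_min, a_max

# Toy data: n = 4 samples, J = 3 input columns, D = 2 output columns.
X = np.array([[1.0, 10.0, 5.0],
              [2.0, 20.0, 6.0],
              [3.0, 30.0, 7.0],
              [4.0, 40.0, 8.0]])
Y = np.array([[100.0, 50.0],
              [110.0, 55.0],
              [120.0, 60.0],
              [130.0, 65.0]])

delta = 1.05               # amplification factor delta > 1 (step 2: Y := delta * Y)
Y_amp = delta * Y
X_tilde, x_min, x_max = normalize_columns(X)
Y_tilde, y_min, y_max = normalize_columns(Y_amp)
```

Note that the min-max map cancels a uniform scale inside the training matrix (Y_tilde is unchanged by δ), but the stored y_min and y_max do carry the amplification; that is what makes the denormalized prediction of step 15 larger than the raw historical loads.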
Step 4: build an improved Elman neural network model formed by connecting M stages of Elman neural networks in series, and determine the transfer function of the middle-layer neurons as f(x) = 1/(1 + e^(−x)) together with the numbers h1, h2, …, hM of middle-layer neurons of each stage; the activation function ζ(x) of the output-layer neurons is a linear function, x denoting the function argument;
Step 5: train the 1st-stage, 2nd-stage, …, M-th-stage Elman neural networks in turn with the BP algorithm, and retain the middle-layer weight coefficients W1, W2, …, WM and thresholds b1, b2, …, bM of the improved Elman neural network model, the connection weights V1, V2, …, VM and thresholds a1, a2, …, aM from the context layer to the middle layer, and the output-layer weight coefficients and thresholds of each stage;
Step 6: using the trained networks, compute the output vectors y1(m), y2(m), …, yn(m) of the m-th-stage Elman neural network's output layer and construct the output estimation matrix Ŷ(m);
Step 7: repeat step 6 until the output estimation matrices Ŷ(1), Ŷ(2), …, Ŷ(M) of all stages are obtained, where m ∈ {1, 2, …, M} and i ∈ {1, 2, …, n};
Step 8: build a three-layer neural network model in which the input layer has MD = M × D neurons, the hidden layer has H neurons and the output layer has D neurons; the activation function of the hidden-layer neurons is φ(x), x denoting the function argument;
Step 9: with the n column vectors of the stacked output-estimation matrix as inputs and v1, v2, …, vn as outputs, train once more with the BP algorithm to obtain the hidden-layer weight coefficients W0 ∈ R^(MD×H) and thresholds b0 ∈ R^(H×1) and the output-layer weight coefficients and thresholds;
Step 10: collect the latest power load data of k consecutive days, record them in order and construct them into an input data vector z; then normalize z according to
z̃(j) = (z(j) − xj(min)) / (xj(max) − xj(min))
to obtain the input vector z̃; here z(j) is the j-th element of the input data vector z, z̃(j) the j-th element of the input vector z̃, and j ∈ {1, 2, …, J};
Step 11: take the input vector z̃ obtained in step 10 as the input of the 1st-stage Elman neural network and compute the output vector ct(1) of its middle layer and the output vector ŷt(1) of its output layer according to
ct(1) = f(W1 z̃ + b1 + V1 ct−1(1) + a1),  ŷt(1) = ζ(W̄1 ct(1) + b̄1),
then initialize m = 2; here t is the index of the prediction being performed, ct−1(1) is the middle-layer output vector of the 1st-stage network at the (t−1)-th prediction, c0(1) at the first prediction (t = 1) is a zero vector, and W̄1, b̄1 denote the stage-1 output-layer weight coefficients and threshold;
Step 12: combine ct(m−1) obtained in step 11 with z̃ into a column vector zt(m) = [ct(m−1); z̃], then take zt(m) as the input of the m-th-stage Elman neural network and compute the output vector ct(m) of its middle layer and the output vector ŷt(m) of its output layer according to
ct(m) = f(Wm zt(m) + bm + Vm ct−1(m) + am),  ŷt(m) = ζ(W̄m ct(m) + b̄m),
where ct−1(m) is the middle-layer output vector of the m-th-stage network at the (t−1)-th prediction; at the first prediction (t = 1), c0(m) is a zero vector, and W̄m, b̄m denote the stage-m output-layer weight coefficients and threshold;
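A minimal sketch of one stage's forward step follows. The exact formulas are figures in the original; the update below is the standard Elman form, using the symbols the text names (Wm, bm for the middle layer; Vm, am for the context-to-middle connection), so it is an assumed reading rather than the patent's verbatim equations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # f(x) = 1/(1 + e^{-x}), the middle-layer transfer function

def elman_step(z, c_prev, W, b, V, a, W_out, b_out):
    """One forward step of a single Elman stage (assumed standard form).

    z      : input vector of this stage
    c_prev : context vector (previous middle-layer output), zero on the first prediction
    Returns (c, y): new middle-layer output and the stage's output-layer vector;
    the output activation zeta is linear.
    """
    c = sigmoid(W @ z + b + V @ c_prev + a)
    y = W_out @ c + b_out        # zeta(x) = x (linear activation)
    return c, y

rng = np.random.default_rng(0)
J, h, D = 6, 4, 2                 # illustrative input, middle-layer, output sizes
W, b = rng.normal(size=(h, J)), rng.normal(size=h)
V, a = rng.normal(size=(h, h)), rng.normal(size=h)
W_out, b_out = rng.normal(size=(D, h)), rng.normal(size=D)

z = rng.normal(size=J)
c0 = np.zeros(h)                  # first prediction: context vector is zero
c1, y1 = elman_step(z, c0, W, b, V, a, W_out, b_out)
```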
Step 13: judge whether m < M; if so, set m = m + 1 and return to step 12; if not, the M output vectors ŷt(1), ŷt(2), …, ŷt(M) have been obtained;
Step 14: combine ŷt(1), ŷt(2), …, ŷt(M) obtained in step 13 into a column vector and compute the output vector C0 of the hidden-layer neurons according to C0 = φ(W0 [ŷt(1); ŷt(2); …; ŷt(M)] + b0);
Step 15: compute the output vector ỹ = ζ(W̄0 C0 + b̄0) of the output-layer neurons, where W̄0 and b̄0 denote the output-layer weight coefficients and thresholds obtained in step 9; then denormalize each element of ỹ according to
y(d) = ỹ(d) · (yd(max) − yd(min)) + yd(min)
to obtain the predicted power load of the next day, y ∈ R^(D×1); here y(d) is the d-th element of y, ỹ(d) the d-th element of ỹ, and d ∈ {1, 2, …, D};
Step 16: repeat steps 10 to 15 to make the next power load prediction.
The training in step 5 proceeds as follows:
Step (5.1): the input layer of the 1st-stage Elman neural network has J neurons, the middle layer h1 neurons and the output layer D neurons; initialize the context-layer weight coefficients and thresholds, the connection weights and thresholds from the context layer to the middle layer, and the middle-layer weight coefficients and thresholds to arbitrary real numbers;
Step (5.2): with u1, u2, …, un as inputs and v1, v2, …, vn as outputs of the 1st-stage Elman neural network, train with the BP algorithm to obtain its middle-layer weight coefficients W1 and thresholds b1, the connection weights V1 and thresholds a1 from the context layer to the middle layer, and its output-layer weight coefficients and thresholds; then initialize m = 1;
Step (5.3): compute the output vectors g1(m), g2(m), …, gn(m) of the m-th-stage middle-layer neurons; here i ∈ {1, 2, …, n}, and for i = 1 the vector g0(m) is set to a zero vector;
Step (5.4): build the matrix Z from the middle-layer output vectors g1(m), …, gn(m) and the original inputs, where Z ∈ R^((hm+J)×n) is a real matrix of dimension (hm + J) × n;
Step (5.5): the input layer of the (m+1)-th-stage Elman neural network has hm + J neurons, the middle layer hm+1 neurons and the output layer D neurons; initialize the middle-layer weight coefficients and thresholds and the output-layer weight coefficients and thresholds to arbitrary real numbers, then initialize the connection weights and thresholds from the context layer to the middle layer to arbitrary real numbers;
Step (5.6): with the n column vectors of the matrix Z as inputs and v1, v2, …, vn as outputs of the (m+1)-th-stage Elman neural network, train with the BP algorithm to obtain its middle-layer weight coefficients Wm+1 ∈ R^((hm+J)×hm+1) and thresholds bm+1 ∈ R^(hm+1×1), the connection weights Vm+1 and thresholds am+1 from the context layer to the middle layer, and its output-layer weight coefficients and thresholds;
Step (5.7): judge whether m + 1 < M; if so, set m = m + 1 and return to step (5.3); if not, the training of the improved Elman neural network model is finished.
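The augmented training input built between steps (5.3) and (5.6) can be sketched as follows. The stacking (middle-layer outputs on top of the original inputs) is inferred here from the stated dimension (hm + J) × n of Z and from the (m+1)-th stage's input-layer size, not from an explicit formula in the text:

```python
import numpy as np

# Assumed shapes: U holds the n training input columns u_1..u_n (J x n);
# G holds the stage-m middle-layer outputs g_1(m)..g_n(m) (h_m x n).
# Z stacks G on top of U, giving the (h_m + J) x n input matrix of stage m+1.
J, h_m, n = 6, 4, 10
rng = np.random.default_rng(1)
U = rng.normal(size=(J, n))
G = rng.normal(size=(h_m, n))

Z = np.vstack([G, U])      # (h_m + J) x n augmented input matrix
```

Every stage therefore re-sees the original inputs U alongside the extracted features, which is the information-preservation property claimed in the advantages paragraph below.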
Compared with the prior art, the method has the following advantages. First, when the prediction model is established, the power load data to be predicted are actively amplified, so that the predicted power load can fully cover the actual power load demand. Second, the method connects several stages of Elman neural networks in series, and every stage also receives the original input data, which avoids information loss during the stepwise extraction of features. Finally, in the specific embodiment below, the reliability and feasibility of the method are verified by comparing the power load prediction results with the actual power load demand.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of power load data over time;
FIG. 3 is a schematic diagram of the structure of the improved Elman neural network of the method of the present invention;
FIG. 4 is a graph of the power load prediction results for the next day.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
A power load prediction method based on an improved Elman neural network comprises the following steps:
Step 1: determine the number of power load measurements taken each day in the power supply area, namely D, and use the N = 365 × D power load data z1, z2, …, zN of the previous 365 days in the power supply area to build the input data matrix X ∈ R^(n×J) and the output data matrix Y ∈ R^(n×D); the specific construction is as follows:
where R^(n×J) and R^(n×D) denote real matrices of dimension n × J and n × D respectively, n = N − (k + 1)D + 1, k is the number of accumulated input days and must satisfy k ≥ 3, kD = k × D denotes the product of k and D, (k + 1)D = (k + 1) × D the product of k + 1 and D, and J = kD;
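The construction formulas of step 1 are figures in the original; one reading consistent with n = N − (k + 1)D + 1 and J = kD is a sliding window that maps kD consecutive measurements to the following D measurements. A sketch under that assumption:

```python
import numpy as np

def build_matrices(z, D, k):
    """Build X (n x J) and Y (n x D) from the flat series z of N daily load
    measurements, assuming a one-measurement sliding window:
    row i of X = z[i : i+kD], row i of Y = z[i+kD : i+(k+1)D]."""
    N = len(z)
    J = k * D
    n = N - (k + 1) * D + 1
    X = np.stack([z[i:i + J] for i in range(n)])
    Y = np.stack([z[i + J:i + J + D] for i in range(n)])
    return X, Y

z = np.arange(40.0)                  # toy series: N = 40 "measurements"
X, Y = build_matrices(z, D=4, k=3)   # k >= 3, as the text requires
```

Each output row is the day immediately following its input window, so training pairs cover every position in the historical series exactly once.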
Step 2: after updating the output data matrix according to Y := δ·Y, normalize the J column vectors x1, x2, …, xJ of the input data matrix X and the D column vectors y1, y2, …, yD of the output data matrix Y according to
x̃j = (xj − xj(min)) / (xj(max) − xj(min)),  ỹd = (yd − yd(min)) / (yd(max) − yd(min)),
where δ > 1 is an amplification factor, x̃j is the j-th column vector after normalization, xj(min) and xj(max) are the minimum and maximum values in column vector xj, j ∈ {1, 2, …, J}, yd(min) and yd(max) are the minimum and maximum values in column vector yd, ỹd is the d-th column vector after normalization, and d ∈ {1, 2, …, D};
Step 3: assemble x̃1, x̃2, …, x̃J into the input matrix Ũ = [x̃1, x̃2, …, x̃J]ᵀ and ỹ1, ỹ2, …, ỹD into the output matrix Ṽ = [ỹ1, ỹ2, …, ỹD]ᵀ; then denote by u1, u2, …, un the n column vectors of Ũ and by v1, v2, …, vn the n column vectors of Ṽ, the superscript T denoting the transpose of a matrix or vector;
Step 4: build an improved Elman neural network model formed by connecting M stages of Elman neural networks in series; determine the transfer function of the middle-layer neurons as f(x) = 1/(1 + e^(−x)), the activation function ζ(x) of the output-layer neurons as a linear function, and the numbers h1, h2, …, hM of middle-layer neurons of each stage, x denoting the function argument;
Step 5: train the 1st-stage, 2nd-stage, …, M-th-stage Elman neural networks in turn with the BP algorithm, and retain the middle-layer weight coefficients W1, W2, …, WM and thresholds b1, b2, …, bM of the improved Elman neural network model, the connection weights V1, V2, …, VM and thresholds a1, a2, …, aM from the context layer to the middle layer, and the output-layer weight coefficients and thresholds of each stage;
The specific procedure for training the Elman neural networks with the BP algorithm in step 5 is:
Step (5.1): the input layer of the 1st-stage Elman neural network has J neurons, the middle layer h1 neurons and the output layer D neurons; initialize the context-layer weight coefficients and thresholds, the connection weights and thresholds from the context layer to the middle layer, and the middle-layer weight coefficients and thresholds to arbitrary real numbers;
Step (5.2): with u1, u2, …, un as inputs and v1, v2, …, vn as outputs of the 1st-stage Elman neural network, train with the BP algorithm to obtain its middle-layer weight coefficients W1 and thresholds b1, the connection weights V1 and thresholds a1 from the context layer to the middle layer, and its output-layer weight coefficients and thresholds; then initialize m = 1;
Step (5.3): compute the output vectors g1(m), g2(m), …, gn(m) of the m-th-stage middle-layer neurons; here i ∈ {1, 2, …, n}, and for i = 1 the vector g0(m) is set to a zero vector;
Step (5.4): build the matrix Z from the middle-layer output vectors g1(m), …, gn(m) and the original inputs, where Z ∈ R^((hm+J)×n) is a real matrix of dimension (hm + J) × n;
Step (5.5): the input layer of the (m+1)-th-stage Elman neural network has hm + J neurons, the middle layer hm+1 neurons and the output layer D neurons; initialize the middle-layer weight coefficients and thresholds and the output-layer weight coefficients and thresholds to arbitrary real numbers, then initialize the connection weights and thresholds from the context layer to the middle layer to arbitrary real numbers;
Step (5.6): with the n column vectors of the matrix Z as inputs and v1, v2, …, vn as outputs of the (m+1)-th-stage Elman neural network, train with the BP algorithm to obtain its middle-layer weight coefficients Wm+1 ∈ R^((hm+J)×hm+1) and thresholds bm+1 ∈ R^(hm+1×1), the connection weights Vm+1 and thresholds am+1 from the context layer to the middle layer, and its output-layer weight coefficients and thresholds;
Step (5.7): judge whether m + 1 < M; if so, set m = m + 1 and return to step (5.3); if not, the training of the improved Elman neural network model is finished;
Step 6: using the trained networks, compute the output vectors y1(m), y2(m), …, yn(m) of the m-th-stage Elman neural network's output layer and construct the output estimation matrix Ŷ(m);
Step 7: repeat step 6 until the output estimation matrices Ŷ(1), Ŷ(2), …, Ŷ(M) of all stages are obtained, where m ∈ {1, 2, …, M} and i ∈ {1, 2, …, n};
Step 8: build a three-layer neural network model in which the input layer has MD = M × D neurons, the hidden layer has H neurons and the output layer has D neurons; determine the activation function of the hidden-layer neurons as φ(x) and that of the output-layer neurons as ζ(x), x denoting the function argument;
Step 9: with the n column vectors of the stacked output-estimation matrix as inputs and v1, v2, …, vn as outputs, train once more with the BP algorithm to obtain the hidden-layer weight coefficients W0 ∈ R^(MD×H) and thresholds b0 ∈ R^(H×1) and the output-layer weight coefficients and thresholds;
Step 10: collect the latest power load data of k consecutive days, record them in order and construct them into an input data vector z; then normalize z according to
z̃(j) = (z(j) − xj(min)) / (xj(max) − xj(min))
to obtain the input vector z̃; here z(j) is the j-th element of the input data vector z, z̃(j) the j-th element of the input vector z̃, and j ∈ {1, 2, …, J};
Step 11: take z̃ as the input of the 1st-stage Elman neural network and compute the output vector ct(1) of its middle layer and the output vector ŷt(1) of its output layer according to
ct(1) = f(W1 z̃ + b1 + V1 ct−1(1) + a1),  ŷt(1) = ζ(W̄1 ct(1) + b̄1),
then initialize m = 2; here t is the index of the prediction being performed, ct−1(1) is the middle-layer output vector of the 1st-stage network at the (t−1)-th prediction, c0(1) at the first prediction (t = 1) is a zero vector, and W̄1, b̄1 denote the stage-1 output-layer weight coefficients and threshold;
Step 12: combine ct(m−1) with z̃ into a column vector zt(m) = [ct(m−1); z̃], then take zt(m) as the input of the m-th-stage Elman neural network and compute the output vector ct(m) of its middle layer and the output vector ŷt(m) of its output layer according to
ct(m) = f(Wm zt(m) + bm + Vm ct−1(m) + am),  ŷt(m) = ζ(W̄m ct(m) + b̄m),
where ct−1(m) is the middle-layer output vector of the m-th-stage network at the (t−1)-th prediction; at the first prediction (t = 1), c0(m) is a zero vector, and W̄m, b̄m denote the stage-m output-layer weight coefficients and threshold;
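Steps 11 to 13 chain the M stages: stage 1 consumes the normalized input vector, and every later stage consumes the previous stage's middle-layer output concatenated with that same input. The following shape-level sketch traces one prediction through the cascade with random, untrained weights; all sizes are illustrative, and the update is the standard Elman form assumed for this sketch (the patent's formula images are not reproduced in the text):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
J, D, M = 6, 2, 3
h = [4, 5, 3]                        # middle-layer sizes h_1..h_M (illustrative)

# Stage m's input size: J for stage 1, h[m-1] + J afterwards (previous stage's
# middle-layer output concatenated with the original normalized input).
in_sizes = [J] + [h[m - 1] + J for m in range(1, M)]
stages = []
for m in range(M):
    stages.append(dict(
        W=rng.normal(size=(h[m], in_sizes[m])), b=rng.normal(size=h[m]),
        V=rng.normal(size=(h[m], h[m])),        a=rng.normal(size=h[m]),
        Wo=rng.normal(size=(D, h[m])),          bo=rng.normal(size=D)))

z_tilde = rng.uniform(size=J)                  # normalized input vector (step 10)
context = [np.zeros(h[m]) for m in range(M)]   # zero context on the first prediction

outputs = []
x = z_tilde
for m in range(M):                   # steps 11-13
    s = stages[m]
    c = sigmoid(s["W"] @ x + s["b"] + s["V"] @ context[m] + s["a"])
    outputs.append(s["Wo"] @ c + s["bo"])
    context[m] = c                   # kept for the next prediction t+1
    x = np.concatenate([c, z_tilde]) # input of the next stage
stacked = np.concatenate(outputs)    # the M*D-vector fed to the fusion network (step 14)
```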
step 13, judging whether the condition M is less than M; if yes, after m is set to m +1, returning to the step (10.2); if not, M output vectors are obtained
Step 14, mixingAre combined into a column vectorThen according to the formulaComputing the output vector C of hidden neurons0;
Step 15: according to the formulaComputing output vectors for output layer neuronsThen, the following formula is used forRespectively carrying out anti-normalization processing on each element in the power load prediction value to obtain the predicted value y ∈ R of the power load in the next dayD×1:
In the above formula, y (d) represents the d-th element in y,to representThe D element of (D), D ∈ {1,2, …, D };
Step 16: repeat steps 10 to 15 to perform the next power load prediction.
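The prediction pass of steps 10 to 15 above can be sketched as follows. The exact update formulas are images in the original patent, so the sketch assumes the standard Elman update with the parameter names retained in step 5 (W_m, b_m for the intermediate layer, V_m, a_m for the receiving layer) and hypothetical names Wo, bo for the output-layer parameters; it is an illustrative reconstruction, not the authoritative method.

```python
import numpy as np

def sigmoid(x):
    # Intermediate-layer transfer function named in step 4: f(x) = 1/(1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def elman_stage_step(z, c_prev, W, b, V, a, Wo, bo):
    """One prediction step of one Elman stage (assumed standard Elman form).

    Returns (c_t, y_t): intermediate-layer and linear output-layer vectors."""
    c_t = sigmoid(W @ z + b + V @ c_prev + a)   # intermediate layer with context
    y_t = Wo @ c_t + bo                          # linear output activation
    return c_t, y_t

def cascade_predict(z_bar, stages, c_prev_list):
    """Steps 11-13: feed z_bar through M serially connected Elman stages.

    Stage m > 1 receives [c_t(m-1); y_t(m-1)] as its input."""
    outputs, contexts = [], []
    x = z_bar
    for m, (W, b, V, a, Wo, bo) in enumerate(stages):
        c_t, y_t = elman_stage_step(x, c_prev_list[m], W, b, V, a, Wo, bo)
        contexts.append(c_t)
        outputs.append(y_t)
        x = np.concatenate([c_t, y_t])          # combined input to the next stage
    return outputs, contexts

# Tiny example: M = 2 stages, J = 3 inputs, h = 2 hidden neurons, D = 1 output.
rng = np.random.default_rng(0)
def make_stage(n_in, h, d):
    return (rng.standard_normal((h, n_in)), np.zeros(h),
            rng.standard_normal((h, h)), np.zeros(h),
            rng.standard_normal((d, h)), np.zeros(d))

stages = [make_stage(3, 2, 1), make_stage(2 + 1, 2, 1)]
c_prev = [np.zeros(2), np.zeros(2)]             # zero context vectors at t = 1
outs, ctxs = cascade_predict(rng.standard_normal(3), stages, c_prev)
```

The M output vectors in `outs` would then be combined and fed to the three-layer fusion network of steps 14 and 15.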
The above-described embodiments are only preferred embodiments of the present invention; although the description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of protection of the present invention.
Claims (2)
1. A power load prediction method based on an improved Elman neural network is characterized by comprising the following steps:
Step 1: determine the number D of measured power load data per day in the power supply area, and use the N = 365 × D power load data z_1, z_2, …, z_N of the previous 365 days in the power supply area to respectively establish an input data matrix X ∈ R^(n×J) and an output data matrix Y ∈ R^(n×D); the specific construction method is as follows:
wherein R^(n×J) represents a real matrix of dimension n × J, and R^(n×D) represents a real matrix of dimension n × D;
Step 2: after updating the output data matrix Y according to the formula Y = δ × Y, normalize the J column vectors x_1, x_2, …, x_J of the input data matrix X and the D column vectors y_1, y_2, …, y_D of the output data matrix Y respectively according to the formulas shown below:
wherein δ > 1 represents the amplification factor; x̄_j represents the j-th column vector after normalization; x_j(min) and x_j(max) respectively represent the minimum and maximum values in the column vector x_j, j ∈ {1, 2, …, J}; y_d(min) and y_d(max) respectively represent the minimum and maximum values in the column vector y_d; ȳ_d represents the d-th column vector after normalization, d ∈ {1, 2, …, D};
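Step 2's amplification and column-wise normalization can be sketched as follows. The normalization formulas are images in the original, so the sketch assumes the usual (x − min)/(max − min) scaling implied by the use of x_j(min) and x_j(max); the names `delta` and `normalize_columns` are illustrative.

```python
import numpy as np

def normalize_columns(M):
    """Min-max normalize each column of M to [0, 1]; also return the
    per-column minima and maxima so the scaling can be inverted later."""
    col_min = M.min(axis=0)
    col_max = M.max(axis=0)
    return (M - col_min) / (col_max - col_min), col_min, col_max

delta = 1.5                      # amplification factor, delta > 1
X = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
Y = np.array([[5.0], [7.0], [6.0]])

Y = delta * Y                    # Y <- delta * Y (assumed reading of "Y = δY")
X_bar, x_min, x_max = normalize_columns(X)
Y_bar, y_min, y_max = normalize_columns(Y)
```

The saved `x_min`, `x_max`, `y_min`, `y_max` are the statistics that step 10 and step 15 reuse for normalizing fresh inputs and denormalizing predictions.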
Step 3: form the normalized column vectors x̄_1, …, x̄_J into an input matrix and the normalized column vectors ȳ_1, …, ȳ_D into an output matrix; then use u_1, u_2, …, u_n to represent the n column vectors of the input matrix and v_1, v_2, …, v_n to represent the n column vectors of the output matrix;
Step 4: build an improved Elman neural network model formed by connecting M stages of Elman neural networks in series, and determine the transfer function of the intermediate-layer neurons as f(x) = 1/(1 + e^(-x)) and the numbers h_1, h_2, …, h_M of intermediate-layer neurons of each stage of Elman neural network; wherein the activation function ζ(x) of the output-layer neurons is a linear function, and x represents the function argument;
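The two activation functions fixed in step 4 are the logistic sigmoid for the intermediate layer and the identity for the output layer; a minimal sketch:

```python
import numpy as np

def f(x):
    # Intermediate-layer transfer function: f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def zeta(x):
    # Output-layer activation zeta(x) is linear (identity)
    return x
```

The sigmoid bounds every intermediate-layer output to (0, 1), which is why the targets are min-max normalized in step 2, while the linear output layer leaves the regression output unbounded.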
Step 5: train the 1st-stage Elman neural network, the 2nd-stage Elman neural network, and so on up to the M-th-stage Elman neural network in sequence using the BP algorithm, and retain the intermediate-layer weight coefficients W_1, W_2, …, W_M and thresholds b_1, b_2, …, b_M, the connection weights V_1, V_2, …, V_M from the receiving layer to the intermediate layer and thresholds a_1, a_2, …, a_M, and the output-layer weight coefficients and thresholds of the improved Elman neural network model;
Step 6: calculate the output vectors y_1(m), y_2(m), …, y_N(m) of the output layer of the m-th-stage Elman neural network according to the formula, and construct the output estimation matrix;
Step 7: repeat step 6 until the output estimation matrices of all stages of Elman neural networks are obtained; wherein m ∈ {1, 2, …, M} and i ∈ {1, 2, …, n};
Step 8: build a three-layer neural network model, wherein the number of input-layer neurons is MD (i.e., M × D), the number of hidden-layer neurons is H, and the number of output-layer neurons is D; the activation function of the hidden-layer neurons is φ(x), where x represents the function argument;
Step 9: with the n column vectors of the output estimation matrix as input and v_1, v_2, …, v_n as output, train again with the BP algorithm to obtain the weight coefficients W_0 ∈ R^(MD×H) and thresholds b_0 ∈ R^(H×1) of the hidden-layer neurons, as well as the weight coefficients and thresholds of the output-layer neurons;
Step 10: collect the power load data of the latest k consecutive days, record them in sequence, and construct them into an input data vector z; then normalize z according to the following formula to obtain the input vector z̄:
In the above equation, z(j) represents the j-th element of the input data vector z, and z̄(j) represents the j-th element of the normalized input vector z̄, j ∈ {1, 2, …, J};
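Step 10's construction of the input vector can be sketched as follows, assuming (as the dimensions suggest) that the k days of D samples each are concatenated into a vector of length J = k × D and normalized element-wise with the training-set statistics x_j(min), x_j(max); all names are illustrative.

```python
import numpy as np

def build_input_vector(recent_days):
    """recent_days: list of k arrays, each of length D (oldest first).
    Flattens them into the input data vector z of length J = k * D."""
    return np.concatenate(recent_days)

def normalize_input(z, x_min, x_max):
    # Element-wise min-max scaling with statistics saved during training
    return (z - x_min) / (x_max - x_min)

k, D = 2, 3                                     # illustrative sizes
recent = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]
z = build_input_vector(recent)                  # length J = k * D = 6
x_min = np.zeros(6)                             # training-set column minima
x_max = np.full(6, 10.0)                        # training-set column maxima
z_bar = normalize_input(z, x_min, x_max)
```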
Step 11: with the input vector z̄ obtained in step 10 as the input of the 1st-stage Elman neural network, calculate the output vector c_t(1) of the intermediate layer of the 1st-stage Elman neural network and the output vector of its output layer according to the formula shown below; after that, initialize m = 2:
In the above formula, t represents the ordinal number of the current prediction, and c_{t-1}(1) represents the output vector of the intermediate layer of the 1st-stage Elman neural network at the (t-1)-th prediction; at the first prediction, i.e. when t = 1, c_0(1) is a zero vector;
Step 12: combine c_t(m-1) obtained in step 11 with the output vector of the output layer of the (m-1)-th stage into a column vector z_t(m); then, with z_t(m) as the input of the m-th-stage Elman neural network, calculate the output vector c_t(m) of the intermediate layer of the m-th-stage Elman neural network and the output vector of its output layer according to the formula shown below:
In the above formula, c_{t-1}(m) represents the output vector of the intermediate layer of the m-th-stage Elman neural network at the (t-1)-th prediction; at the first prediction, i.e. when t = 1, c_0(m) is a zero vector;
Step 13: judge whether the condition m < M in step 12 holds; if yes, set m = m + 1 and return to step 12; if not, the M output vectors are obtained;
Step 14: combine the output vectors obtained in step 13 into a column vector; then calculate the output vector C_0 of the hidden-layer neurons according to the formula:
Step 15: calculate the output vector of the output-layer neurons according to the formula; then perform inverse normalization on each of its elements according to the formula below to obtain the predicted value y ∈ R^(D×1) of the power load for the next day:
In the above formula, y(d) represents the d-th element of y, and the quantity being denormalized is the d-th element of the output-layer vector, d ∈ {1, 2, …, D};
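The inverse normalization of step 15 can be sketched as the assumed inverse of the step-2 scaling, including division by the amplification factor δ applied to Y before normalization; since the patent's formula is an image, this inverse is a reconstruction and the names are illustrative.

```python
import numpy as np

def denormalize(y_hat, y_min, y_max, delta):
    """Undo the per-column min-max scaling of step 2, then divide out the
    amplification factor delta that was applied to the output matrix Y."""
    return (y_hat * (y_max - y_min) + y_min) / delta

y_hat = np.array([0.0, 0.5, 1.0])    # network outputs in [0, 1]
y_min = np.array([2.0, 2.0, 2.0])    # column minima of the amplified Y
y_max = np.array([4.0, 4.0, 4.0])    # column maxima of the amplified Y
y = denormalize(y_hat, y_min, y_max, delta=2.0)
```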
Step 16: repeat steps 10 to 15 to perform the next power load prediction.
2. The power load prediction method based on an improved Elman neural network according to claim 1, wherein the training of step 5 comprises the following steps:
Step (5.1): the input layer of the 1st-stage Elman neural network has J neurons, the intermediate layer has h_1 neurons, and the output layer has D neurons; initialize the weight coefficients and thresholds of the intermediate layer, the connection weights and thresholds from the receiving layer to the intermediate layer, and the weight coefficients and thresholds of the output layer to arbitrary real numbers;
Step (5.2): with u_1, u_2, …, u_n as the input of the 1st-stage Elman neural network and v_1, v_2, …, v_n as the output of the 1st-stage Elman neural network, train with the BP algorithm to obtain the intermediate-layer weight coefficients and thresholds, the connection weights and thresholds from the receiving layer to the intermediate layer, and the output-layer weight coefficients and thresholds of the 1st-stage Elman neural network; after that, initialize m = 1;
Step (5.3): calculate the output vectors g_1(m), g_2(m), …, g_n(m) of the intermediate-layer neurons of the m-th-stage Elman neural network according to the formula:
In the above formula, i ∈ {1, 2, …, n}, and when i = 1, g_0(m) is set to a zero vector;
wherein Z represents a real matrix of dimension (h_m + J) × n;
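The stated dimension (h_m + J) × n, together with step (5.6)'s input layer of h_m + J neurons, suggests that each column of Z stacks the intermediate-layer output g_i(m) on top of the training input u_i; a sketch under that assumption (the name `build_Z` is illustrative):

```python
import numpy as np

def build_Z(g_list, u_list):
    """Stack each intermediate-layer output g_i (length h_m) on top of the
    corresponding training input u_i (length J); columns become (h_m + J)-vectors."""
    cols = [np.concatenate([g, u]) for g, u in zip(g_list, u_list)]
    return np.stack(cols, axis=1)

h_m, J, n = 2, 3, 4
g_list = [np.ones(h_m) * i for i in range(n)]            # toy context outputs
u_list = [np.arange(J, dtype=float) for _ in range(n)]   # toy training inputs
Z = build_Z(g_list, u_list)                              # shape (h_m + J, n)
```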
Step (5.5): the input layer of the (m+1)-th-stage Elman neural network has h_m + J neurons, the intermediate layer has h_{m+1} neurons, and the output layer has D neurons; initialize the weight coefficients and thresholds of the intermediate layer to arbitrary real numbers, initialize the weight coefficients and thresholds of the output layer to arbitrary real numbers, and then initialize the connection weights and thresholds from the receiving layer to the intermediate layer to arbitrary real numbers;
Step (5.6): with the n column vectors of the matrix Z as the input of the (m+1)-th-stage Elman neural network and v_1, v_2, …, v_n as the output of the (m+1)-th-stage Elman neural network, train with the BP algorithm to obtain the intermediate-layer weight coefficients and thresholds, the connection weights and thresholds from the receiving layer to the intermediate layer, and the output-layer weight coefficients and thresholds of the (m+1)-th-stage Elman neural network; wherein the intermediate-layer weight matrix is a real matrix of dimension (h_m + J) × h_{m+1}, and the corresponding threshold is a real vector of dimension h_{m+1} × 1;
Step (5.7): judge whether the condition m + 1 < M holds; if yes, set m = m + 1 and return to step (5.3); if not, the training of the improved Elman neural network model is finished.
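The forward sweep that feeds one trained stage's context outputs to the next stage, steps (5.3) to (5.5), can be sketched as follows; BP training itself is elided, random weights stand in for trained ones, and the update formula is the assumed standard Elman form since the original formula is an image.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stage_context_outputs(U, W, b, V, a):
    """Compute g_i = f(W u_i + b + V g_{i-1} + a) over the whole training set,
    with g_0 a zero vector, as in step (5.3). U has shape (J, n)."""
    g_prev = np.zeros(b.shape[0])
    G = []
    for i in range(U.shape[1]):
        g_prev = sigmoid(W @ U[:, i] + b + V @ g_prev + a)
        G.append(g_prev)
    return np.stack(G, axis=1)                  # shape (h_m, n)

rng = np.random.default_rng(1)
J, h1, n = 3, 2, 5
U = rng.standard_normal((J, n))                 # training inputs u_1..u_n
W, b = rng.standard_normal((h1, J)), np.zeros(h1)   # stand-ins for trained weights
V, a = rng.standard_normal((h1, h1)), np.zeros(h1)

G = stage_context_outputs(U, W, b, V, a)
Z = np.vstack([G, U])                           # (h1 + J) x n, input of stage 2
```

The columns of `Z` would then serve as the training inputs of the next stage in step (5.6), and the loop of step (5.7) repeats this until all M stages are trained.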
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011168134.6A CN112200383B (en) | 2020-10-28 | 2020-10-28 | Power load prediction method based on improved Elman neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112200383A true CN112200383A (en) | 2021-01-08 |
CN112200383B CN112200383B (en) | 2024-05-17 |
Family
ID=74011684
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011168134.6A Active CN112200383B (en) | 2020-10-28 | 2020-10-28 | Power load prediction method based on improved Elman neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112200383B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113887820A (en) * | 2021-10-20 | 2022-01-04 | 国网浙江省电力有限公司 | Method and device for predicting fault of electric power spot business system, computer equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631517A (en) * | 2015-12-17 | 2016-06-01 | 河海大学 | Photovoltaic power generation power short term prediction method based on mind evolution Elman neural network |
CN106651020A (en) * | 2016-12-16 | 2017-05-10 | 燕山大学 | Short-term power load prediction method based on big data reduction |
CN111028100A (en) * | 2019-11-29 | 2020-04-17 | 南方电网能源发展研究院有限责任公司 | Refined short-term load prediction method, device and medium considering meteorological factors |
CN111428926A (en) * | 2020-03-23 | 2020-07-17 | 国网江苏省电力有限公司镇江供电分公司 | Regional power load prediction method considering meteorological factors |
CN111912875A (en) * | 2020-06-23 | 2020-11-10 | 宁波大学 | Fractionating tower benzene content soft measurement method based on stack type Elman neural network |
Non-Patent Citations (5)
Title |
---|
Li Zhiheng et al., "Design of a medium- and long-term power load forecasting algorithm based on an improved BP neural network", Automation & Instrumentation, no. 10, pp. 23-25 *
Wang Ze; Cao Lisha, "Power plant data prediction based on Elman neural network", Inner Mongolia Science Technology & Economy, no. 03, pp. 93-94 *
Su Yijing; Gu Wei; Zhao Yi; Dong Li; Jiang Chen; Yu Jingzhe, "Load forecasting of a regional power grid during the plum-rain season considering meteorological factors", Zhejiang Electric Power, no. 12 *
Zhao Mingyang et al., "Application of an improved Elman neural network in short-term power load forecasting", Ningxia Engineering Technology, no. 02, pp. 115-117 *
Guo Jiaojiao, "Short-term power load forecasting based on an improved Elman neural network", China Masters' Theses Full-text Database, Information Science and Technology (monthly), no. 2015, pp. 30-42 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||