CN106600059B — Smart grid short-term load prediction method based on an improved RBF neural network (Google Patents)
Publication number: CN106600059B (application CN201611148874.7A)
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
 G06Q10/00—Administration; Management
 G06Q10/04—Forecasting or optimisation, e.g. linear programming, "travelling salesman problem" or "cutting stock problem"

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
 G06N3/00—Computer systems based on biological models
 G06N3/02—Computer systems based on biological models using neural network models
 G06N3/04—Architectures, e.g. interconnection topology
 G06N3/0481—Nonlinear activation functions, e.g. sigmoids, thresholds

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
 G06N3/00—Computer systems based on biological models
 G06N3/02—Computer systems based on biological models using neural network models
 G06N3/08—Learning methods

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
 G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
 G06Q50/06—Electricity, gas or water supply
Abstract
The invention discloses a smart grid short-term load prediction method based on an improved RBF neural network, relates to the technical field of smart grids, and is used for determining the basis function centers and improving smart grid load prediction accuracy. The prediction method comprises the following steps: S1, initializing the network; S2, calculating the basis function centers c_i; S3, calculating the variances ζ_i from the basis function centers c_i; S4, computing the hidden layer outputs R_i from the basis function centers c_i and the variances ζ_i; S5, calculating the output of the output layer from the hidden layer outputs R_i; S6, calculating the prediction error E with the sum of mean-square-error function; S7, updating the connection weights between the hidden layer neurons and the output layer neurons in the neural network; S8, checking the prediction error E: if E is within the expected range, the iterative computation ends; otherwise the process returns to step S4 and E is recomputed iteratively. The method and device are used for predicting the load of the power grid.
Description
Technical Field
The invention relates to the technical field of smart grids, in particular to a smart grid short-term load prediction method based on an improved RBF neural network.
Background
The rapid development of the smart grid generates a large amount of power consumption data (also called sample data), and the analysis of these data is of great significance. Applying the sample data to short-term load prediction with a prediction method improves load prediction accuracy and plays an important role in the safe scheduling and economic operation of the power system. The Radial Basis Function (RBF) neural network is the most widely applied prediction method in load prediction, because it is a local approximation network that can approximate any continuous function with arbitrary precision, has the unique best-approximation property, has no local minimum problem, and has a simple topological structure and a fast learning rate. Three parameters of the RBF neural network prediction method mainly influence the prediction accuracy: the basis function centers, the basis function radii, and the connection weights between the hidden layer and the output layer of the network. The connection weights are usually obtained by the gradient descent method. The basis function centers and radii have a very large influence on the prediction accuracy, so existing research mainly focuses on how to determine the basis function centers and radii of the RBF neural network. The prior art mainly calculates the basis function centers and radii in the following ways:
the first uses clustering methods (for example, the K-means and FCM methods) to calculate the basis function centers and radii; the second uses heuristic methods (for example, genetic algorithms and particle swarm methods). The heuristic methods have high complexity and long prediction times on large-scale smart grid load data, so clustering methods are more suitable for determining the RBF neural network basis function centers and radii in large-scale smart grid load prediction.
In addition, the RBF neural network basis function centers are mainly determined with the FCM method; but in smart grid load prediction the load data are large in scale and high in dimensionality, the FCM method is complex, and the resulting smart grid load prediction accuracy is low.
Disclosure of Invention
The invention aims to provide a smart grid short-term load prediction method based on an improved RBF neural network, which is used for determining the basis function centers and improving the smart grid load prediction accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
The invention provides a smart grid short-term load prediction method based on an improved RBF neural network, which comprises the following steps:
S1, initializing the network;
S2, calculating the basis function centers c_i;
S3, calculating the variances ζ_i from the basis function centers c_i;
S4, computing the hidden layer outputs R_i from the basis function centers c_i and the variances ζ_i;
S5, calculating the output of the output layer from the hidden layer outputs R_i;
S6, calculating the prediction error E with the sum of mean-square-error function;
S7, updating the connection weights between the hidden layer neurons and the output layer neurons in the neural network;
S8, checking the prediction error E: if E is within the expected range, the iterative computation ends; otherwise the process returns to step S4 and E is recomputed iteratively.
Step S1 includes: determining the number of input layer neurons N_Ⅰ, the number of hidden layer neurons N_Ⅱ, and the number of output layer neurons N_Ⅲ, and initializing the learning rate η and the basis function overlap coefficient λ; the number of hidden layer neurons N_Ⅱ equals the number of basis function centers.
Step S2 includes:
S21, input the fuzzy index m, the iteration stop threshold ε, the PCA cumulative contribution rate factor, the number of basis function centers N_Ⅱ, the clustering attribute weights ω_n, and the raw sample data X; where X = {x_1, x_2, …, x_N}, N is the number of sample points, each sample point x_j = {x_j1, x_j2, …, x_js}, j = 1, 2, … N, and s is the number of attributes contained in each sample point, i.e. its dimension. In the PCA-WFCM method the sample data X are divided into K classes with clustering centers V = {v_1, v_2, …, v_K}, where 2 ≤ K ≤ N.
S22, perform PCA attribute dimension reduction on the sample data X. Compute the reduced dimension L of each sample point from the cumulative contribution rate condition Σ_{n=1}^{L} λ_n / Σ_{n=1}^{S} λ_n ≥ ε_PCA and retain all dimensions up to L as clustering attributes, where S is the original dimension of each sample point, λ_n are the eigenvalues of the covariance matrix in PCA, and ε_PCA is the PCA cumulative contribution factor. Calculate each clustering attribute weight as ω_n = λ_n / Σ_{l=1}^{L} λ_l, n = 1, 2, …, L, and from the clustering attribute weights ω_n obtain the weight vector W = {ω_n} of the clustering attributes and the reduced sample data X_new.
S23, initialize the membership matrix U = {u_ij}: let 0 ≤ u_ij ≤ 1 with Σ_{i=1}^{K} u_ij = 1, where u_ij is the degree of membership of the j-th sample point in the i-th class and K is the number of clustering centers V.
S24, calculate the clustering centers V = {v_i} from the membership matrix U and the reduced sample data X_new: v_i = Σ_{j=1}^{N} u_ij^m x_j / Σ_{j=1}^{N} u_ij^m, where m is the fuzzy index and x_j is the j-th sample point.
S25, iteratively calculate the membership matrix U = {u_ij} from the reduced sample data X_new: u_ij = 1 / Σ_{k=1}^{K} (d_ij / d_kj)^{2/(m−1)}, where m is the fuzzy index expressing the fuzziness of the membership matrix U (the larger m, the fuzzier U; here m = 2), K is the number of clustering centers V, and d_ij is the weighted Euclidean distance from each sample point x_j to the clustering center v_i, calculated as d_ij = (Σ_{n=1}^{L} ω_n (x_jn − v_in)²)^{1/2}, where L is the dimension of each sample point after dimensionality reduction and ω_n is the clustering attribute weight.
S26, calculate the objective function J from the membership matrix U and the clustering centers V: J = Σ_{i=1}^{K} Σ_{j=1}^{N} u_ij^m d_ij², where m is the fuzzy index, d_ij is the weighted Euclidean distance from each sample point x_j to the clustering center v_i, K is the number of clustering centers V, and N is the number of sample points.
S27, test the objective function J: if |J^(t) − J^(t−1)| < ε, output the clustering centers V, i.e. the basis function centers c_i; otherwise return to step S24 until |J^(t) − J^(t−1)| < ε is satisfied, then stop the iterative calculation and output the clustering centers V. Here ε is the iteration stop threshold and t is the iteration count.
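The weighted FCM iteration of steps S23–S27 can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the patent's implementation: the function name `wfcm`, the random membership initialization, and the iteration cap are assumptions.

```python
import numpy as np

def wfcm(X, K, w, m=2.0, eps=1e-5, max_iter=100, rng=None):
    """Weighted fuzzy C-means sketch (steps S23-S27).

    X : (N, L) reduced sample data, K : number of cluster centers,
    w : (L,) clustering attribute weights, m : fuzzy index, eps : stop threshold.
    Returns the cluster centers V, i.e. the basis function centers c_i.
    """
    rng = np.random.default_rng(rng)
    N = X.shape[0]
    # S23: random membership matrix with columns summing to 1
    U = rng.random((K, N))
    U /= U.sum(axis=0)
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        # S24: cluster centers v_i = sum_j u_ij^m x_j / sum_j u_ij^m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # weighted Euclidean distances d_ij, shape (K, N)
        diff = X[None, :, :] - V[:, None, :]
        D = np.sqrt(np.maximum((w * diff ** 2).sum(axis=2), 1e-12))
        # S25: membership update u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        ratio = (D[:, None, :] / D[None, :, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=1)
        # S26: objective J = sum_i sum_j u_ij^m d_ij^2
        J = ((U ** m) * D ** 2).sum()
        # S27: stop when |J(t) - J(t-1)| < eps
        if abs(J - J_prev) < eps:
            break
        J_prev = J
    return V
```

On well-separated data the returned centers coincide with the cluster means, which then serve directly as the RBF basis function centers c_i.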
Step S3 includes: calculating the radii ζ_i of the hidden layer neurons as ζ_i = λ min_{j≠i} ‖c_i − c_j‖, i, j = 1, 2, … N_Ⅱ, where c_i is a basis function center, N_Ⅱ is the number of hidden layer neurons, and λ is the basis function overlap coefficient.
Step S4 includes: from the reduced sample data X_new, the basis function centers c_i, and the variances ζ_i, obtain the output of the hidden layer: R_i = exp(−‖x − c_i‖² / (2ζ_i²)), i = 1, 2, … N_Ⅱ, where x is an input sample and N_Ⅱ is the number of hidden layer neurons.
Step S5 includes: from the reduced sample data X_new, obtain the output of the output layer: y = Σ_{i=1}^{N_Ⅱ} w_i R_i, where N_Ⅱ is the number of hidden layer neurons, w_i is the connection weight from the i-th hidden layer neuron to the output layer neuron, and R_i is the output of the hidden layer.
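Steps S3–S5 amount to a single forward pass: radii from the center spacing, Gaussian hidden outputs, and a linear output combination. A hedged NumPy sketch (the helper name `rbf_forward` and the array shapes are assumptions, not from the patent):

```python
import numpy as np

def rbf_forward(X, centers, weights, overlap=1.0):
    """RBF forward pass sketch (steps S3-S5).

    X : (N, L) input samples, centers : (K, L) basis function centers c_i,
    weights : (K,) hidden-to-output connection weights w_i,
    overlap : basis function overlap coefficient lambda.
    """
    # S3: radius of each hidden neuron, zeta_i = lambda * min_{j != i} ||c_i - c_j||
    dist_cc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(dist_cc, np.inf)          # exclude j == i
    zeta = overlap * dist_cc.min(axis=1)       # (K,)
    # S4: hidden layer outputs R_i = exp(-||x - c_i||^2 / (2 zeta_i^2))
    dist_xc = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    R = np.exp(-dist_xc ** 2 / (2.0 * zeta ** 2))   # (N, K)
    # S5: output layer is the linear combination y = sum_i w_i R_i
    return R @ weights, R
```

A sample lying exactly on a center produces a hidden output of 1 for that neuron, which decays as the Gaussian with distance.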
Step S6 includes: calculating the prediction error E with the sum of mean-square-error function. Take a set of input vectors {x_j, j = 1, 2 … O} and corresponding output values {y_j, j = 1, 2, … O} as training samples, where O is the number of samples; the prediction error is E = (1/2) Σ_{j=1}^{O} (y_j − ŷ_j)², where ŷ_j is the network output for x_j.
Step S7 includes: updating the connection weights according to w_i(q+1) = w_i(q) − η ∂E/∂w_i, where η is the learning rate, E is the prediction error, and q is the number of updates.
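A minimal sketch of the gradient-descent weight training in steps S6–S7, assuming a fixed hidden-layer output matrix; the function name `train_weights`, the zero initialization, and the stopping rule on E are illustrative assumptions:

```python
import numpy as np

def train_weights(R, y, lr=0.1, eps=1e-8, max_updates=10000):
    """Gradient-descent training of the hidden-to-output weights (steps S6-S7).

    R : (N, K) hidden layer outputs, y : (N,) target load values,
    lr : learning rate eta.  Returns the weights and the final error E.
    """
    w = np.zeros(R.shape[1])
    for _ in range(max_updates):
        y_hat = R @ w                       # network output for each sample
        err = y_hat - y
        E = 0.5 * (err ** 2).sum()         # S6: E = 1/2 sum_j (y_j - yhat_j)^2
        if E < eps:                        # S8-style check: stop when E is small
            break
        w -= lr * (R.T @ err)              # S7: w(q+1) = w(q) - eta * dE/dw
    return w, E
```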
The clustering attribute weights are calculated as follows:
the covariance matrix C maps S-dimensional data into an L-dimensional subspace, where L ≪ S. Let X = {x_n}, n = 1, 2, …, N, be zero-mean data, i.e. Σ_{n=1}^{N} x_n = 0; then C = (1/N) Σ_{n=1}^{N} x_n x_n^T, where T denotes transposition. Perform the eigenvalue decomposition of the covariance matrix C: C = QΛQ^T, and project an S-dimensional sample x_i onto the first L principal component directions, i.e. Y = XQ_L. Let the eigenvalues of the covariance matrix C satisfy λ_1 ≥ λ_2 ≥ … ≥ λ_S; λ_l / Σ_{n=1}^{S} λ_n is the contribution rate of the l-th principal component and Σ_{n=1}^{L} λ_n / Σ_{n=1}^{S} λ_n is the cumulative contribution rate of the first L principal components. The clustering attribute weight of the n-th attribute after dimensionality reduction is ω_n = λ_n / Σ_{l=1}^{L} λ_l, n = 1, 2 …, L. With the eigenvector set Q = [q_1, q_2, …, q_S] and the eigenvalues Λ = diag(λ_1, λ_2, …, λ_S), L is chosen so that the cumulative contribution rate is greater than 95%.
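The PCA computation above can be sketched as follows. The function name `pca_attribute_weights`, the default 95% target, and the use of `numpy.linalg.eigh` on the symmetric covariance matrix are illustrative choices, not prescribed by the patent:

```python
import numpy as np

def pca_attribute_weights(X, target=0.95):
    """PCA dimension reduction with clustering attribute weights.

    X : (N, S) raw sample data.  Returns the reduced data Y = X_c Q_L and the
    weights omega_n = lambda_n / sum_{l<=L} lambda_l, where L is the smallest
    number of principal components whose cumulative contribution exceeds target.
    """
    Xc = X - X.mean(axis=0)                      # center: zero-mean data
    C = (Xc.T @ Xc) / Xc.shape[0]                # covariance matrix C
    lam, Q = np.linalg.eigh(C)                   # eigenvalue decomposition of C
    order = np.argsort(lam)[::-1]                # sort lambda_1 >= ... >= lambda_S
    lam, Q = lam[order], Q[:, order]
    cum = np.cumsum(lam) / lam.sum()             # cumulative contribution rates
    L = int(np.searchsorted(cum, target) + 1)    # smallest L reaching the target
    omega = lam[:L] / lam[:L].sum()              # clustering attribute weights
    return Xc @ Q[:, :L], omega                  # Y = X Q_L, omega_n
```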
The invention determines the RBF basis function centers with PCA-WFCM: the load data are first reduced by PCA to obtain fewer, uncorrelated prediction inputs, which reduces the overlap of the RBF basis functions, determines the basis function centers better, and lowers the complexity of the prediction algorithm. In addition, weighted FCM basis function clustering is used, weighting the reduced attributes with the variance contribution rates of the different attributes obtained from the PCA processing, which improves the clustering accuracy, yields more accurate basis function centers, and improves the load prediction accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 shows the basic structure of an RBF neural network according to an embodiment of the present invention;
FIG. 2 is a flowchart of the smart grid short-term load prediction method based on the improved RBF neural network in the embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention uses a weighted FCM clustering algorithm based on PCA dimension reduction, abbreviated PCA-WFCM, to obtain the short-term load prediction result of the smart grid.
In addition, the iterative calculation involved in the invention refers to successive approximation: a rough initial approximation is taken first, and this initial value is then repeatedly corrected with one or more formulas until a preset accuracy requirement is met. For example, the iterations in the invention build a trained model from the prediction error E, involving the output of the hidden layer, the output of the output layer, the updates of the connection weights, and so forth.
In the invention, the hidden layer centers c_i are also called the basis function centers c_i, and the radii ζ_i of the hidden layer neurons are also called the variances ζ_i.
Example one
The embodiment provides a smart grid short-term load prediction method based on an improved RBF neural network. The method uses the PCA-WFCM clustering algorithm to determine the RBF basis function centers c_i, and the gradient descent method to determine the connection weights between the hidden layer and the output layer of the RBF neural network. The method is described in detail as follows:
First, establish the sample data X = {x_1, x_2, …, x_N}, where each sample point x_j = {x_j1, x_j2, …, x_js}, j = 1, 2, … N, for a total of N sample points, each containing s attributes. In the cluster analysis the sample data X are divided into K classes, 2 ≤ K ≤ N, with K cluster centers V = {v_1, v_2, …, v_K}.
The Radial Basis Function (RBF) neural network is a feed-forward neural network based on function approximation theory; compared with the BP (back-propagation) neural network, it has good function approximation characteristics, a simple structure, and a high training speed. As shown in fig. 1, the RBF neural network is a three-layer network composed of an input layer 1, a hidden layer 2, and an output layer 3.
The core idea of the RBF neural network is to use radial basis functions as the basis of the hidden layer units to form the hidden layer space; the input vector is transformed in the hidden layer so that data that are linearly inseparable in the low-dimensional space become linearly separable in the high-dimensional space. The transformation from the input layer to the hidden layer is nonlinear, while that from the hidden layer to the output layer is linear. The hidden layer transfer function is a radial basis function: a non-negative nonlinear function that is radially symmetric about a local distribution center. The connection weights between the input layer and the hidden layer of the RBF neural network are 1; the hidden layer performs the parameter adjustment of the activation function, and the output layer adjusts the connection weights. Three parameters must be solved for in the RBF neural network: the basis function centers c_i, the width of the hidden layer, and the connection weights from the hidden layer to the output layer. The Gaussian function is a commonly used basis function in RBF neural networks, so the output of the hidden layer neurons is:
R_i = exp(−‖x − c_i‖² / (2ζ_i²)), i = 1, 2, … N_Ⅱ; (1)
where c_i is the center of the hidden layer, i.e. the center of the i-th Gaussian function, N_Ⅱ is the number of hidden layer neurons, and ζ_i is the radius of the i-th RBF hidden layer neuron, which can be expressed as:
ζ_i = λ min_{j≠i} ‖c_i − c_j‖, i, j = 1, 2, … N_Ⅱ; (2)
where c_i is a hidden layer center, N_Ⅱ is the number of hidden layer neurons, and λ is the basis function overlap coefficient.
The output of the RBF neural network is a linear combination of the outputs of all hidden layer neurons and can be expressed as:
y = Σ_{i=1}^{N_Ⅱ} w_i R_i; (3)
where w_i is the connection weight from the i-th hidden layer neuron to the output neuron and R_i is the output of the hidden layer.
In the nonlinear approximation process of the RBF neural network, given the training samples, the algorithm must solve the following two key problems: 1) determining the network structure, i.e. determining the basis function centers c_i of the RBF neural network; 2) adjusting the connection weights w_i between the hidden layer and the output layer. The choice of these parameters affects the prediction performance of the RBF neural network, so before prediction the optimal w_i and c_i must be selected to improve the prediction performance.
The connection weights w_i between the hidden layer and the output layer are generally trained by the gradient descent method. Take a set of input vectors {x_j, j = 1, 2 … O} and corresponding output values {y_j, j = 1, 2, … O} as training samples, where O is the number of samples. The sum of mean-square-error function is then:
E = (1/2) Σ_{j=1}^{O} (y_j − ŷ_j)²; (4)
to minimize the error function, the connection weight update is:
w_i(q+1) = w_i(q) − η ∂E/∂w_i. (5)
center of basis function c_{i}And determining by adopting a PCAWFCM clustering method. The PCAWFCM clustering algorithm is described in detail below.
The PCA-WFCM clustering algorithm builds on the traditional FCM clustering algorithm: it reduces algorithm complexity by performing attribute dimension reduction on the sample data, and further improves clustering performance by weighted FCM clustering that uses the variance contribution rate of each attribute after dimension reduction as its attribute weight. The algorithm has two steps, PCA dimensionality reduction followed by weighted FCM clustering; each step is introduced below.
Principal Component Analysis (PCA) is a linear dimensionality reduction algorithm that maps S-dimensional data into an L-dimensional subspace, where L ≪ S. Mathematically it requires the eigenvectors of the covariance matrix of the original data. Let X = {x_n}, n = 1, 2, …, N, be zero-mean data, i.e. Σ_{n=1}^{N} x_n = 0. The covariance matrix C is defined as:
C = (1/N) Σ_{n=1}^{N} x_n x_n^T; (6)
its eigenvalue decomposition gives:
C = QΛQ^T; (7)
where Q = [q_1, q_2, …, q_S] is the eigenvector set and Λ = diag(λ_1, λ_2, …, λ_S) holds the eigenvalues. Using the first L eigenvectors Q_L = [q_1, q_2, …, q_L], an S-dimensional sample x_i is projected onto the first L principal component directions, i.e. Y = XQ_L. Let the eigenvalues of C satisfy λ_1 ≥ λ_2 ≥ … ≥ λ_S > 0, and define λ_l / Σ_{n=1}^{S} λ_n as the contribution rate of the l-th principal component and Σ_{n=1}^{L} λ_n / Σ_{n=1}^{S} λ_n as the cumulative contribution rate of the first L principal components. The cumulative contribution rate is generally required to exceed 95%, so that dimensionality is reduced with little information loss.
The PCA-WFCM algorithm is based on the FCM algorithm, a fuzzy clustering algorithm, i.e. a soft partitioning method: each sample point is not strictly assigned to a single class but belongs to each class with a certain degree of membership. Let u_ij denote the membership degree of the j-th sample point in the i-th class; the membership matrix and the cluster centers are U = {u_ij} and V = {v_i}, respectively. On this basis the PCA-WFCM algorithm accounts for the importance of the attributes by giving different weights to the attributes after dimension reduction. The weight of an attribute is its variance contribution rate after PCA has reduced the original data attributes; a larger contribution rate indicates that the attribute plays a larger role in the data set. The weight of the n-th attribute after dimensionality reduction is ω_n = λ_n / Σ_{l=1}^{L} λ_l, n = 1, 2 …, L.
The goal of the clustering algorithm is to maximize intra-class similarity and minimize inter-class similarity, with similarity measured by Euclidean distance. The algorithm therefore determines the cluster centers V and the membership matrix U by minimizing the objective function
J = Σ_{i=1}^{K} Σ_{j=1}^{N} u_ij^m d_ij²; (8)
subject to Σ_{i=1}^{K} u_ij = 1, 0 ≤ u_ij ≤ 1; (9)
where d_ij is the weighted Euclidean distance from sample x_j to cluster center v_i:
d_ij = (Σ_{n=1}^{L} ω_n (x_jn − v_in)²)^{1/2}; (10)
In (8), m ≥ 1 is the fuzzy weighting index, expressing the fuzziness of the membership matrix U; the larger m, the fuzzier the classification, and usually m = 2. L is the dimension of each sample point after dimensionality reduction and ω_n is the clustering attribute weight. Setting the derivatives of (8) under constraint (9) to zero yields the update formulas for u_ij and v_i:
u_ij = 1 / Σ_{k=1}^{K} (d_ij / d_kj)^{2/(m−1)}; (11)
v_i = Σ_{j=1}^{N} u_ij^m x_j / Σ_{j=1}^{N} u_ij^m; (12)
where x_j is the j-th sample.
Example two
According to the idea of the first embodiment, the neural network prediction in this embodiment requires the input and output of the network and the number of hidden layer nodes to be determined in advance. The network inputs are determined by the parameters that affect the predicted value. Because the load curve of a smart grid user has good periodicity, the daily and weekly periodic characteristics can both be taken into account for the load value at a given moment; that is, the load values at the same moment on the day before the prediction moment and at the same moment in the week before the prediction moment are selected. Specifically, the number of input layer neurons is N_Ⅰ = 9: the load value one step before the prediction point L(t−1), the load value two steps before L(t−2), the load value at the same point on the previous day L(t−48), the load value one step before that L(t−49), the load value one step after that L(t−47), the load value at the same point in the previous week L(t−48×7), the load value one step before that L(t−48×7−1), the load value one step after that L(t−48×7+1), and a day-type parameter, i.e. whether the day is a weekend. The number of output layer neurons is N_Ⅲ = 1, the predicted load at the given moment. The number of hidden layer neurons N_Ⅱ is determined by minimizing the prediction error. All network inputs are processed with max-min normalization.
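The nine-dimensional input construction above can be sketched as follows, assuming half-hourly sampling (48 points per day), as the indices L(t−48) and L(t−48×7) imply; the helper name `build_input` and the per-vector max-min normalization are illustrative assumptions:

```python
import numpy as np

def build_input(load, t, is_weekend):
    """Build the 9-dimensional network input for prediction time t.

    load : 1-D array of half-hourly load values (48 points per day),
    t : index of the point to predict (t >= 48*7 + 1 for a full week of history),
    is_weekend : day-type parameter (1 for weekend, 0 otherwise).
    """
    day, week = 48, 48 * 7
    x = np.array([
        load[t - 1],         # L(t-1): one step before the prediction point
        load[t - 2],         # L(t-2): two steps before
        load[t - day],       # L(t-48): same point on the previous day
        load[t - day - 1],   # L(t-49): one step before that
        load[t - day + 1],   # L(t-47): one step after that
        load[t - week],      # L(t-48*7): same point in the previous week
        load[t - week - 1],  # one step before that
        load[t - week + 1],  # one step after that
        float(is_weekend),   # day-type parameter
    ])
    # max-min normalization of the network input
    return (x - x.min()) / (x.max() - x.min() + 1e-12)
```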
As shown in fig. 2, the smart grid short-term load prediction method based on the improved RBF neural network includes:
(1) Network initialization. Determine the number of input layer neurons N_Ⅰ, the number of hidden layer neurons N_Ⅱ, and the number of output layer neurons N_Ⅲ from the system input and output sequences, and initialize the learning rate η and the basis function overlap coefficient λ.
(2) Calculate the RBF basis function centers c_i. The basis function centers are determined with the PCA-WFCM clustering algorithm; the specific process is:
S21, input the fuzzy index m, the iteration stop threshold ε, the principal component cumulative contribution rate factor, the number of clusters, i.e. the number of basis function centers N_Ⅱ, the clustering attribute weights ω_n, and the sample data X;
S22, perform PCA attribute dimension reduction on the sample data. Obtain the reduced dimension L from the cumulative contribution rate condition Σ_{n=1}^{L} λ_n / Σ_{n=1}^{S} λ_n ≥ ε_PCA and retain all dimensions up to L as clustering attributes; initialize each clustering attribute weight according to ω_n = λ_n / Σ_{l=1}^{L} λ_l, n = 1, 2, …, L, obtaining the weight vector W = {ω_n} of the clustering attributes and the reduced sample data X_new; where X_new = {x_1, x_2, …, x_g}, g is the number of sample points, and each sample point x_g = {x_g1, x_g2, …, x_gL} contains L attributes, i.e. the reduced dimension of each sample point;
S23, initializing a membership matrix U according to the formula (9);
s24, calculating a clustering center V according to the formula (12);
s25, calculating a membership matrix U according to the formula (11);
s26, calculating an objective function J according to the formula (8);
S27, if |J^(t) − J^(t−1)| < ε, the clustering centers V, i.e. the basis function centers c_i, are obtained; otherwise return to step S24;
(3) Solve the variances ζ_i according to equation (2).
(4) Compute the output of the hidden layer according to equation (1), from the reduced sample data X_new, the hidden layer centers c_i, and the variances ζ_i.
(5) The output of the output layer is calculated according to equation (3).
(6) The prediction error E is calculated according to equation (4).
(7) Update the connection weights according to equation (5).
(8) Judge whether the iteration of the algorithm has finished; if not, return to step (4).
The invention determines the RBF basis function centers with PCA-WFCM: the load data are first reduced by PCA to obtain fewer, uncorrelated prediction inputs, which reduces the overlap of the RBF basis functions, determines the basis function centers better, and lowers the complexity of the prediction algorithm. In addition, weighted FCM basis function clustering is used, weighting the reduced attributes with the variance contribution rates of the different attributes obtained from the PCA processing, which improves the clustering accuracy, yields more accurate basis function centers, and improves the load prediction accuracy.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (7)
1. A short-term load prediction method for a smart grid based on an improved RBF neural network, characterized by comprising the following steps:
s1, initializing the network;
performing PCA dimensionality reduction on the sample data of the smart grid; performing weighted FCM (fuzzy C-means) basis function clustering on the PCA-reduced sample data, wherein the reduced clustering attributes are weighted with the variance contribution rates of the different attributes obtained in the PCA dimensionality reduction, to obtain the basis function centers c_i; the method comprises the following steps:
s2, calculating the center c of the basis function_{i}；
S3, according to the center c of the basis function_{i}Calculating the variance ζ_{i}；
The clustering weights are calculated as follows:
To map the S-dimensional data into an L-dimensional subspace with L << S, let X = {x_{n}}, n = 1, 2, …, N, be zero-mean data and compute the covariance matrix
C = (1/N) Σ_{n=1}^{N} x_{n}x_{n}^{T},
where T denotes transposition.
Eigenvalue decomposition is carried out on the covariance matrix C:
C = QΛQ^{T},
with the set of eigenvectors Q = [q_{1}, q_{2}, …, q_{S}] and the eigenvalues Λ = diag(λ_{1}, λ_{2}, …, λ_{S}).
An S-dimensional data point x_{i} is projected onto the first L principal component directions, i.e. Y = XQ_{L}.
Let the eigenvalues of the covariance matrix C satisfy λ_{1} ≥ λ_{2} ≥ … ≥ λ_{S}, take θ_{l} = λ_{l} / Σ_{s=1}^{S} λ_{s} as the contribution rate of the l-th principal component and Σ_{l=1}^{L} θ_{l} as the cumulative contribution rate of the first L principal components, and choose L so that the cumulative contribution rate is greater than 95%.
The clustering attribute weight of the k-th attribute after dimensionality reduction is then
ω_{k} = λ_{k} / Σ_{l=1}^{L} λ_{l}, k = 1, 2, …, L.
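As an illustration of this PCA step, the following sketch (synthetic data; all names are illustrative, not from the patent) computes the covariance eigendecomposition, applies the 95% cumulative contribution rule, and derives the clustering attribute weights:

```python
import numpy as np

# Synthetic data: six attributes, one of which carries almost no variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 3] *= 0.01                       # low-variance attribute to be dropped

Xc = X - X.mean(axis=0)               # zero-mean data
C = Xc.T @ Xc / len(Xc)               # covariance matrix C
lam, Q = np.linalg.eigh(C)            # eigenvalues (ascending) and eigenvectors
order = np.argsort(lam)[::-1]         # reorder so lambda_1 >= ... >= lambda_S
lam, Q = lam[order], Q[:, order]

contrib = lam / lam.sum()             # contribution rate of each component
L = int(np.searchsorted(np.cumsum(contrib), 0.95)) + 1  # cumulative rate > 95%
Y = Xc @ Q[:, :L]                     # projection onto the first L directions
w = lam[:L] / lam[:L].sum()           # clustering attribute weights omega_k
```

With this data the near-constant attribute is discarded and the remaining eigenvalue shares become the clustering weights used by the weighted FCM step.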
According to the obtained basis function centers c_{i} and variances ζ_{i}, RBF neural network prediction is performed on the sample data after the weighted FCM basis function clustering to obtain the output of the output layer, specifically as follows:
S4, calculating the output R_{i} of the hidden layer from the basis function centers c_{i} and the variances ζ_{i}; step S4 comprises:
obtaining the output of the hidden layer from the reduced sample data X_{new}, the basis function centers c_{i} and the variances ζ_{i}:
R_{i} = exp(−||X_{new} − c_{i}||² / (2ζ_{i}²)), i = 1, 2, …, N_{Ⅱ},
where N_{Ⅱ} denotes the number of hidden layer neurons;
S5, calculating the output of the output layer from the hidden layer outputs R_{i}, thereby obtaining the short-term load prediction result for the smart grid;
S6, calculating the prediction error E using the mean-square-error sum function;
S7, updating the connection weights between the hidden layer neurons and the output layer neurons of the neural network;
S8, evaluating the prediction error E: if E is within the expected range, ending the iterative computation; otherwise, returning to step S4 and recalculating the prediction error E iteratively.
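The forward pass of steps S4-S5 can be sketched as follows (a minimal illustration with made-up centers, radii and weights; not the patent's implementation):

```python
import numpy as np

def rbf_forward(x, centers, sigma, W):
    # S4, hidden layer: R_i = exp(-||x - c_i||^2 / (2 * sigma_i^2))
    R = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * sigma ** 2))
    # S5, output layer: weighted sum of the hidden-layer outputs
    return R @ W, R

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # basis function centers c_i
sigma = np.array([0.5, 0.5])                   # radii of the hidden neurons
W = np.array([1.0, 2.0])                       # hidden-to-output weights
y, R = rbf_forward(np.array([0.0, 0.0]), centers, sigma, W)
```

At the first center the hidden output is exactly 1, while the second unit contributes only exp(−4), showing how a well-separated center dominates the prediction for nearby inputs.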
2. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S1 comprises:
determining the number of input layer neurons N_{Ⅰ}, the number of hidden layer neurons N_{Ⅱ} and the number of output layer neurons N_{Ⅲ}, and initializing the learning rate η and the basis function overlap coefficient τ, wherein the number of hidden layer neurons N_{Ⅱ} equals the number of basis function centers.
3. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S2 comprises:
S21, inputting the fuzzy index m, the iteration stop threshold ε, the PCA cumulative contribution rate factor, the number of basis function centers N_{Ⅱ}, the clustering attribute weights ω_{n} and the original data X;
wherein the sample data X = {x_{1}, x_{2}, …, x_{N}}, N is the number of sample points, each sample point x_{j} = {x_{j1}, x_{j2}, …, x_{js}}, j = 1, 2, …, N, and s denotes the number of attributes contained in each sample point, i.e. the dimension of each sample point; in the PCA-WFCM method the sample data X are divided into K classes with clustering centers V = {v_{1}, v_{2}, …, v_{k}}, 2 ≤ K ≤ N;
S22, carrying out PCA attribute dimensionality reduction on the sample data X, and calculating the reduced dimension L of each sample point as the smallest L satisfying
Σ_{n=1}^{L} λ_{n} / Σ_{n=1}^{S} λ_{n} ≥ α,
where S denotes the original dimension of each sample point, λ_{n} denotes the eigenvalue whose normalized share is the variance contribution rate in PCA, and α denotes the PCA cumulative contribution rate factor; all dimensions up to dimension L are retained as clustering attributes;
calculating the clustering attribute weight of each attribute according to
ω_{n} = λ_{n} / Σ_{l=1}^{L} λ_{l}, n = 1, 2, …, L;
and obtaining from the clustering attribute weights ω_{n} the weight vector W = {ω_{n}} of the clustering attributes and the reduced sample data X_{new};
S23, initializing a membership matrix U:
let u be equal to or less than 0_{ij}≤1，U＝{u_{ij}}；
In the formula u_{ij}Representing the membership degree of the jth sample point belonging to the ith class, and K representing the number of clustering centers V;
S24, calculating the clustering centers V = {v_{i}} from the membership matrix U and the reduced sample data X_{new}:
v_{i} = Σ_{j=1}^{N} u_{ij}^{m} x_{j} / Σ_{j=1}^{N} u_{ij}^{m},
where m is the fuzzy index and x_{j} denotes the j-th sample point;
S25, iteratively calculating the membership matrix U = {u_{ij}} from the reduced sample data X_{new}:
u_{ij} = 1 / Σ_{k=1}^{K} (d_{ij} / d_{kj})^{2/(m−1)},
where m is the fuzzy index expressing the degree of fuzziness of the membership matrix U (the larger m, the fuzzier U; here m = 2), K denotes the number of clustering centers V, and d_{ij} denotes the weighted Euclidean distance from each sample point x_{j} to the clustering center v_{i}, calculated as
d_{ij} = (Σ_{n=1}^{L} ω_{n}(x_{jn} − v_{in})²)^{1/2},
where L denotes the dimension of each sample point after dimensionality reduction and ω_{n} denotes the clustering attribute weight;
S26, calculating the objective function J from the membership matrix U and the clustering centers V:
J = Σ_{i=1}^{K} Σ_{j=1}^{N} u_{ij}^{m} d_{ij}²,
where m is the fuzzy index, d_{ij} denotes the weighted Euclidean distance from each sample point x_{j} to the clustering center v_{i}, K denotes the number of clustering centers V, and N denotes the number of sample points;
S27, evaluating the objective function J: if |J^{(t)} − J^{(t−1)}| < ε, stopping the iterative calculation and outputting the clustering centers V, i.e. the basis function centers c_{i}; otherwise, returning to step S24 until |J^{(t)} − J^{(t−1)}| < ε is satisfied, where ε denotes the iteration stop threshold and t denotes the iteration count.
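The weighted FCM iteration of steps S23-S27 can be sketched as follows (standard weighted fuzzy C-means updates; the function and variable names, initialization and test data are illustrative, not from the patent):

```python
import numpy as np

def weighted_fcm(X, w, K, m=2.0, eps=1e-6, max_iter=100, seed=0):
    """Weighted fuzzy C-means: the PCA-derived attribute weights w
    enter the per-attribute distance d_ij."""
    rng = np.random.default_rng(seed)
    U = rng.random((K, len(X)))
    U /= U.sum(axis=0)                              # S23: columns sum to 1
    J_prev = np.inf
    for t in range(max_iter):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)  # S24: cluster centers
        # S25: weighted Euclidean distances d_ij (clamped away from zero)
        sq = ((X[None, :, :] - V[:, None, :]) ** 2 * w).sum(axis=2)
        d = np.sqrt(np.maximum(sq, 1e-12))
        p = 2.0 / (m - 1.0)
        U = (d ** -p) / (d ** -p).sum(axis=0)       # membership update
        J = ((U ** m) * d ** 2).sum()               # S26: objective function
        if abs(J - J_prev) < eps:                   # S27: stopping test
            break
        J_prev = J
    return V, U

# two well-separated groups of identical points
X = np.vstack([np.zeros((20, 2)), np.full((20, 2), 5.0)])
V, U = weighted_fcm(X, w=np.array([0.6, 0.4]), K=2)
```

On this toy data the two centers converge onto the two groups, and each membership column remains a probability vector throughout.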
4. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S3 comprises:
calculating the radius σ_{i} of the hidden layer neurons according to
σ_{i} = τ · min_{j≠i} ||c_{i} − c_{j}||, i, j = 1, 2, …, N_{Ⅱ},
where c_{i} denotes a basis function center, N_{Ⅱ} denotes the number of hidden layer neurons, and τ denotes the basis function overlap coefficient.
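This radius rule can be sketched as follows, reading the minimum over j as excluding j = i (otherwise it would always pick the zero self-distance); the centers below are illustrative:

```python
import numpy as np

def basis_radii(centers, tau=1.0):
    # pairwise distances ||c_i - c_j|| between all basis function centers
    D = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)        # exclude the zero self-distance (j == i)
    return tau * D.min(axis=1)         # sigma_i = tau * nearest-center distance

centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
sigma = basis_radii(centers, tau=1.0)  # nearest-center distances: 3, 3, 4
```

Scaling each radius by the distance to the nearest other center keeps neighbouring Gaussian units from overlapping excessively, which is the stated purpose of the overlap coefficient τ.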
5. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S5 comprises:
obtaining the output of the output layer from the reduced sample data X_{new}:
y = Σ_{i=1}^{N_{Ⅱ}} w_{i}R_{i},
where N_{Ⅱ} denotes the number of hidden layer neurons, w_{i} denotes the connection weight from the i-th hidden layer neuron to the output layer neuron, and R_{i} denotes the output of the hidden layer.
6. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S6 comprises:
calculating the prediction error E using the mean-square-error sum function: taking a set of input vectors {x_{j}}, j = 1, 2, …, O, and the corresponding output values y_{j}, j = 1, 2, …, O, as training samples, where O is the number of samples, the prediction error is
E = (1/2) Σ_{j=1}^{O} (y_{j} − ŷ_{j})²,
where ŷ_{j} denotes the network output for the j-th training sample.
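As a small worked example of the mean-square-error sum over O = 3 samples (assuming the conventional factor 1/2; the numbers are illustrative):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])          # target outputs y_j
y_hat = np.array([1.1, 1.9, 3.2])      # network predictions
E = 0.5 * np.sum((y - y_hat) ** 2)     # E = 0.5 * (0.01 + 0.01 + 0.04) = 0.03
```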
7. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S7 comprises:
updating the connection weight W according to
W^{(q+1)} = W^{(q)} − η ∂E/∂W,
where η denotes the learning rate, E denotes the prediction error, and q denotes the number of updates.
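A gradient-descent reading of this update can be sketched as follows (an illustrative LMS-style rule for a linear output layer, assumed for this sketch rather than taken verbatim from the patent):

```python
import numpy as np

def update_weights(W, R, y_true, eta=0.1):
    # gradient of E = 0.5 * (y_true - R @ W)^2 w.r.t. W is -(y_true - y_hat) * R,
    # so gradient descent adds eta * (y_true - y_hat) * R
    y_hat = R @ W
    return W + eta * (y_true - y_hat) * R

W = np.zeros(2)
R = np.array([1.0, 0.5])               # hidden-layer outputs for one sample
for q in range(200):                   # q counts the updates
    W = update_weights(W, R, y_true=2.0)
```

Repeating the update drives the output R @ W toward the target, illustrating the iterative error reduction checked in step S8.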
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201611148874.7A CN106600059B (en)  20161213  20161213  Intelligent power grid shortterm load prediction method based on improved RBF neural network 
Publications (2)
Publication Number  Publication Date 

CN106600059A CN106600059A (en)  20170426 
CN106600059B true CN106600059B (en)  20200724 
Family
ID=58802260
Citations (7)
Publication number  Priority date  Publication date  Assignee  Title 

CN102982393A (en) *  20121109  20130320  山东电力集团公司聊城供电公司  Online prediction method of electric transmission line dynamic capacity 
CN103095494A (en) *  20121231  20130508  北京邮电大学  Risk evaluation method of electric power communication network 
CN103136598A (en) *  20130226  20130605  福建省电力有限公司  Monthly electrical load computer forecasting method based on wavelet analysis 
CN103646354A (en) *  20131128  20140319  国家电网公司  Effective index FCM and RBF neural networkbased substation load characteristic categorization method 
CN105678404A (en) *  20151230  20160615  东北大学  Microgrid load prediction system and method based on electricity purchased online and dynamic correlation factor 
CN105787584A (en) *  20160128  20160720  华北电力大学（保定）  Wind turbine malfunction early warning method based on cloud platform 
EP3098762A1 (en) *  20150529  20161130  Samsung Electronics Co., Ltd.  Dataoptimized neural network traversal 

Legal Events
Date  Code  Title  Description 

PB01  Publication  
SE01  Entry into force of request for substantive examination  
GR01  Patent grant