CN106600059B - Intelligent power grid short-term load prediction method based on improved RBF neural network - Google Patents
Intelligent power grid short-term load prediction method based on improved RBF neural network
- Publication number: CN106600059B (application CN201611148874.7A)
- Authority
- CN
- China
- Prior art keywords
- center
- output
- neural network
- basis function
- hidden layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention discloses a smart grid short-term load prediction method based on an improved RBF neural network, relates to the technical field of smart grids, and is used for determining the basis function centers and improving smart grid load prediction accuracy. The prediction method comprises the following steps: S1, initializing the network; S2, calculating the basis function centers c_i; S3, calculating the variances ζ_i from the basis function centers c_i; S4, computing the hidden-layer outputs R_i from the basis function centers c_i and the variances ζ_i; S5, calculating the output of the output layer from the hidden-layer outputs R_i; S6, calculating the prediction error E with the sum-of-squared-errors function; S7, updating the connection weights between the hidden-layer neurons and the output-layer neurons of the neural network; S8, checking the prediction error E: if E is within the expected range, the iterative computation ends; otherwise the process returns to step S4 and E is computed again by iteration. The method is used for predicting power grid load.
Description
Technical Field
The invention relates to the technical field of smart grids, in particular to a smart grid short-term load prediction method based on an improved RBF neural network.
Background
The rapid development of the smart grid generates a large amount of power-consumption data (also called sample data), and analysis of these data is of great significance. Applying the sample data to short-term load prediction improves load prediction accuracy and plays an important role in the safe scheduling and economic operation of the power system. The radial basis function (RBF) neural network is the most widely applied prediction method in load prediction: it is a local approximation network that can approximate any continuous function with arbitrary precision, has a uniquely best approximation property, avoids the local-minimum problem, and offers a simple topology and fast learning. Three parameters of the RBF neural network prediction method mainly influence the prediction accuracy: the basis function centers, the basis function radii, and the connection weights between the hidden layer and the output layer of the network. The connection weights are usually obtained by gradient descent. Since the basis function centers and radii strongly influence the prediction accuracy, existing research mainly focuses on how to determine them. In the prior art, the following methods are mainly adopted to calculate the basis function centers and radii:
The first calculates the basis function centers and radii with a clustering method (for example, the K-means method or the FCM method); the second uses heuristic methods (for example, genetic algorithms or particle swarm optimization). Under the large-scale load data of the smart grid, heuristic methods are complex and slow to produce predictions, so clustering methods are better suited to determining the RBF neural network basis function centers and radii for large-scale smart grid load prediction.
Among clustering methods, the FCM method is mainly adopted to determine the RBF basis function centers. In smart grid load prediction, however, the load data are large in scale and high in dimension, which makes the FCM method complex and ultimately leads to low prediction accuracy.
Disclosure of Invention
The invention aims to provide a smart grid short-term load prediction method based on an improved RBF neural network, which is used for determining a basis function center and improving the smart grid load prediction precision.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a smart grid short-term load prediction method based on an improved RBF neural network, which comprises the following steps:
S1, initializing the network;
S2, calculating the basis function centers c_i;
S3, calculating the variances ζ_i from the basis function centers c_i;
S4, computing the hidden-layer outputs R_i from the basis function centers c_i and the variances ζ_i;
S5, calculating the output of the output layer from the hidden-layer outputs R_i;
S6, calculating the prediction error E with the sum-of-squared-errors function;
S7, updating the connection weights between the hidden-layer neurons and the output-layer neurons of the neural network;
S8, checking the prediction error E: if E is within the expected range, end the iterative computation; otherwise, return to step S4 and compute E again by iteration.
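For concreteness, the iterative loop of steps S4–S8 can be sketched as follows (a non-limiting illustration in Python/NumPy; the function name, parameter defaults, vectorized form, and stopping tolerance are assumptions and not part of the claimed method; the centers and variances are assumed to come from steps S2 and S3):

```python
import numpy as np

def train_rbf(X, y, centers, variances, lr=0.1, tol=1e-3, max_iter=1000):
    """Sketch of the S4-S8 iteration: forward pass, error, weight update."""
    n_hidden = len(centers)
    w = np.zeros(n_hidden)                       # hidden-to-output weights
    E = np.inf
    for _ in range(max_iter):
        # S4: hidden-layer outputs (Gaussian basis functions)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        R = np.exp(-d2 / (2.0 * variances ** 2))
        # S5: output layer is a linear combination of hidden outputs
        y_hat = R @ w
        # S6: sum-of-squared-errors prediction error
        err = y - y_hat
        E = 0.5 * float(err @ err)
        # S8: stop when the error is within the expected range
        if E < tol:
            break
        # S7: gradient-descent update of the connection weights,
        # then return to S4
        w += lr * (R.T @ err)
    return w, E
```

With centers placed at the training points and unit variances, the loop drives the error below the tolerance on a toy problem.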
Step S1 includes: determining the number of input-layer neurons N_I, the number of hidden-layer neurons N_II, and the number of output-layer neurons N_III, and initializing the learning rate η and the basis function overlap coefficient τ. The number of hidden-layer neurons N_II equals the number of basis function centers.
Step S2 includes:
S21, input the fuzzy index m, the iteration stop threshold ε, the PCA cumulative contribution rate factor θ, the number of basis function centers N_II, the cluster attribute weights ω_n, and the original sample data X. Here X = {x_1, x_2, …, x_N}, N is the number of sample points, and each sample point x_j = {x_j1, x_j2, …, x_js}, j = 1, 2, …, N, where s is the number of attributes in each sample point, i.e., its dimension. In the PCA-WFCM method the sample data X are divided into K classes with cluster centers V = {v_1, v_2, …, v_K}, 2 ≤ K ≤ N.
S22, perform PCA attribute dimension reduction on the sample data X. Choose the reduced dimension L of each sample point as the smallest L satisfying (Σ_{n=1}^{L} λ_n) / (Σ_{n=1}^{S} λ_n) ≥ θ, and retain all dimensions up to L as cluster attributes, where S is the original dimension of each sample point, λ_n is the n-th eigenvalue of the covariance matrix in PCA, and θ is the PCA cumulative contribution factor. Calculate each cluster attribute weight as ω_n = λ_n / (Σ_{s=1}^{S} λ_s), n = 1, 2, …, L, obtaining the weight vector W = {ω_n} of the cluster attributes and the reduced-dimension sample data X_new.
S23, initialize the membership matrix U: let 0 ≤ u_ij ≤ 1 and Σ_{i=1}^{K} u_ij = 1, U = {u_ij}, where u_ij is the membership degree of the j-th sample point to the i-th class and K is the number of cluster centers V.
S24, calculate the cluster centers V from the membership matrix U and the reduced-dimension sample data X_new: v_i = (Σ_{j=1}^{N} (u_ij)^m x_j) / (Σ_{j=1}^{N} (u_ij)^m), V = {v_i}, where m is the fuzzy index and x_j is the j-th sample point.
S25, iteratively calculate the membership matrix U from the reduced-dimension sample data X_new: u_ij = 1 / (Σ_{k=1}^{K} (d_ij / d_kj)^{2/(m−1)}), U = {u_ij}, where m is the fuzzy index describing the fuzziness of the membership matrix U (the larger m is, the fuzzier U becomes; here m = 2), K is the number of cluster centers V, and d_ij is the weighted Euclidean distance from sample point x_j to cluster center v_i, calculated as d_ij = sqrt(Σ_{n=1}^{L} ω_n (x_jn − v_in)²), where L is the reduced dimension of each sample point and ω_n the cluster attribute weight.
S26, calculate the objective function J from the membership matrix U and the cluster centers V: J = Σ_{i=1}^{K} Σ_{j=1}^{N} (u_ij)^m d_ij², where m is the fuzzy index, d_ij the weighted Euclidean distance from sample point x_j to cluster center v_i, K the number of cluster centers V, and N the number of sample points.
S27, check the objective function J: if |J^(t) − J^(t−1)| < ε, output the cluster centers V, i.e., the basis function centers c_i; otherwise return to step S24 until |J^(t) − J^(t−1)| < ε is satisfied, then stop the iterative calculation and take the cluster centers V. Here ε is the iteration stop threshold and t the iteration count.
Step S3 includes: calculating the radii ζ_i of the hidden-layer neurons according to ζ_i = τ min_{j≠i} ‖c_i − c_j‖, i, j = 1, 2, …, N_II, where c_i is the basis function center, N_II the number of hidden-layer neurons, and τ the basis function overlap coefficient.
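The radius formula of step S3 can be illustrated as follows (a sketch assuming the minimum is taken over j ≠ i; the function name and NumPy form are illustrative, not part of the claimed method):

```python
import numpy as np

def basis_radii(centers, tau=1.0):
    """zeta_i = tau * min_{j != i} ||c_i - c_j|| (step S3).
    `tau` is the basis function overlap coefficient."""
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    np.fill_diagonal(dist, np.inf)               # exclude j == i
    return tau * dist.min(axis=1)
```

Each radius is the distance to the nearest other center, scaled by the overlap coefficient.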
Step S4 includes: obtaining the hidden-layer outputs from the reduced-dimension sample data X_new, the basis function centers c_i, and the variances ζ_i: R_i = exp(−‖X_new − c_i‖² / (2ζ_i²)), i = 1, 2, …, N_II, where N_II is the number of hidden-layer neurons.
Step S5 includes: obtaining the output of the output layer from the reduced-dimension sample data X_new: y = Σ_{i=1}^{N_II} w_i R_i, where N_II is the number of hidden-layer neurons, w_i the connection weight from the i-th hidden-layer neuron to the output-layer neuron, and R_i the output of the hidden layer.
Step S6 includes: calculating the prediction error E with the sum-of-squared-errors function. Let a set of input vectors {x_j, j = 1, 2, …, O} and corresponding output values {y_j, j = 1, 2, …, O} be the training samples; then E = (1/2) Σ_{j=1}^{O} (y_j − ŷ_j)², where ŷ_j is the network output for x_j.
Step S7 includes: updating the connection weights by gradient descent, w_i(q+1) = w_i(q) − η ∂E/∂w_i, where η is the learning rate, E the prediction error, and q the number of updates.
The cluster attribute weights are calculated as follows:
The covariance matrix C is obtained in mapping S-dimensional data into an L-dimensional subspace, L << S. Let X = {x_n}, n = 1, 2, …, N, be zero-mean data, i.e., Σ_{n=1}^{N} x_n = 0; then C = (1/N) X^T X, where T denotes transposition. Eigenvalue decomposition of the covariance matrix C gives C = QΛQ^T. An S-dimensional data point x_i is projected onto the first L principal component directions, i.e., Y = X Q_L. Let the eigenvalues of C satisfy λ_1 ≥ λ_2 ≥ … ≥ λ_S; λ_l / (Σ_{s=1}^{S} λ_s) is the contribution rate of the l-th principal component and (Σ_{l=1}^{L} λ_l) / (Σ_{s=1}^{S} λ_s) the cumulative contribution rate of the first L principal components. The cluster attribute weight of the l-th attribute after dimension reduction is ω_l = λ_l / (Σ_{s=1}^{S} λ_s), l = 1, 2, …, L. Here Q = [q_1, q_2, …, q_S] is the eigenvector set and Λ = diag(λ_1, λ_2, …, λ_S) the eigenvalue matrix; L is chosen so that the cumulative contribution rate exceeds 95%.
The method comprises the steps that a PCA-WFCM is adopted to determine the RBF basis function center, load data are subjected to PCA dimension reduction processing to obtain less irrelevant prediction input, so that the overlap of the RBF basis functions is reduced, the basis function center is better determined, and the complexity of a prediction algorithm is reduced; in addition, weighted FCM basis function clustering is adopted, and the variance contribution rates of different attributes obtained after PCA processing are weighted for the attributes after dimensionality reduction, so that clustering accuracy is improved, a more accurate basis function center is obtained, and load prediction accuracy is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a basic structure of an RBF neural network according to an embodiment of the present invention;
FIG. 2 is a flowchart of the smart grid short-term load prediction method based on the improved RBF neural network in the embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to a weighted FCM clustering algorithm based on PCA dimension reduction, abbreviated PCA-WFCM, used to obtain the short-term load prediction result of the smart grid.
In addition, the iterative calculation involved in the invention refers to successive approximation: a rough approximate value is taken first, and the initial value is then repeatedly corrected with one or more formulas until the preset precision requirement is met. For example, the iterations in the invention build a trained model of the prediction error E, involving the outputs of the hidden layer, the output of the output layer, the updates of the weights, and so forth.
In the invention, the hidden-layer center c_i is also called the basis function center c_i; the radius ζ_i of a hidden-layer neuron is also called the variance ζ_i.
Example one
This embodiment provides a smart grid short-term load prediction method based on an improved RBF neural network. The method adopts the PCA-WFCM clustering algorithm to determine the RBF basis function centers c_i, and a gradient descent method to determine the connection weights between the hidden layer and the output layer of the RBF neural network. The method is described in detail as follows:
First, establish the sample data X = {x_1, x_2, …, x_N}, where each sample point x_j = {x_j1, x_j2, …, x_js}, j = 1, 2, …, N, for a total of N sample points, each containing s attributes. In cluster analysis the sample data X are divided into K classes, 2 ≤ K ≤ N, with K cluster centers V = {v_1, v_2, …, v_K}.
The radial basis function (RBF) neural network is a feedforward neural network based on function approximation theory; compared with the BP (back propagation) neural network, it has the advantages of good function approximation, simple structure, and fast training. As shown in FIG. 1, the RBF neural network is a three-layer network composed of an input layer 1, a hidden layer 2, and an output layer 3.
The core idea of the RBF neural network is to use radial basis functions as the bases of the hidden-layer units to form the hidden-layer space, and to transform the input vectors in the hidden layer so that data linearly inseparable in the low-dimensional space become linearly separable in the high-dimensional space. The transformation from the input layer to the hidden layer is nonlinear, while that from the hidden layer to the output layer is linear. The hidden-layer transfer function is a radial basis function: a non-negative nonlinear function that is radially symmetric about a local center. The connection weights between the input layer and the hidden layer of the RBF neural network are 1; the hidden layer performs the parameter adjustment of the activation function, and the output layer adjusts the connection weights. Three parameters must be solved in the RBF neural network: the basis function centers c_i, the widths of the hidden layer, and the connection weights from the hidden layer to the output layer. The Gaussian function is the commonly used basis function in RBF neural networks, so the output of the i-th hidden-layer neuron is:
R_i = exp(−‖x − c_i‖² / (2ζ_i²)), i = 1, 2, …, N_II; (1)
where c_i is the hidden-layer center, i.e., the center of the i-th Gaussian function, N_II the number of hidden-layer neurons, and ζ_i the radius of the RBF hidden-layer neuron, which can be expressed as:
ζ_i = τ min_{j≠i} ‖c_i − c_j‖, i, j = 1, 2, …, N_II; (2)
where c_i is the hidden-layer center, N_II the number of hidden-layer neurons, and τ the basis function overlap coefficient.
The output of the RBF neural network is a linear combination of the outputs of all hidden-layer neurons and can be expressed as:
y = Σ_{i=1}^{N_II} w_i R_i; (3)
where w_i is the connection weight from the i-th hidden-layer neuron to the output neuron and R_i the output of the hidden layer.
In the nonlinear approximation process of the RBF neural network, once the training samples are given, the algorithm has to solve two key problems: 1) determining the network structure, i.e., the basis function centers c_i of the RBF neural network; 2) adjusting the connection weights w between the hidden layer and the output layer. The choice of these parameters affects the prediction performance of the RBF neural network, so the optimal w and c_i must be selected before prediction to improve that performance.
The connection weights w between the hidden layer and the output layer are generally trained by gradient descent. Take a set of input vectors {x_j, j = 1, 2, …, O} and corresponding output values {y_j, j = 1, 2, …, O} as training samples, where O is the number of samples. The sum-of-squared-errors function is then:
E = (1/2) Σ_{j=1}^{O} (y_j − ŷ_j)²; (4)
To minimize the error function, the connection weights are updated as:
w_i(q+1) = w_i(q) − η ∂E/∂w_i; (5)
where η is the learning rate and q the number of updates.
center of basis function ciAnd determining by adopting a PCA-WFCM clustering method. The PCA-WFCM clustering algorithm is described in detail below.
The PCA-WFCM clustering algorithm is based on the traditional FCM clustering algorithm, reduces algorithm complexity by performing attribute dimension reduction on sample data, and further improves clustering performance by performing weighted FCM clustering by using each attribute variance contribution rate after dimension reduction as an attribute weight. The algorithm comprises two steps, wherein PCA dimensionality reduction is firstly carried out, then weighted FCM clustering is carried out, and each step is respectively introduced below.
Principal component analysis (PCA) is a linear dimensionality reduction algorithm that maps S-dimensional data into an L-dimensional subspace, L << S. It requires the eigenvectors of the covariance matrix of the original data. Let X = {x_n}, n = 1, 2, …, N, be zero-mean data, i.e., Σ_{n=1}^{N} x_n = 0. The covariance matrix C is defined as:
C = (1/N) X^T X; (6)
Eigenvalue decomposition gives:
C = QΛQ^T; (7)
where Q = [q_1, q_2, …, q_S] is the set of eigenvectors and Λ = diag(λ_1, λ_2, …, λ_S) the eigenvalues. Using the first L eigenvectors Q_L = [q_1, q_2, …, q_L], an S-dimensional data point is projected onto the L-dimensional principal component directions as Y = X Q_L. Let the eigenvalues of the covariance matrix C satisfy λ_1 ≥ λ_2 ≥ … ≥ λ_S > 0; define λ_l / (Σ_{s=1}^{S} λ_s) as the contribution rate of the l-th principal component and (Σ_{l=1}^{L} λ_l) / (Σ_{s=1}^{S} λ_s) as the cumulative contribution rate of the first L principal components. It is generally required to exceed 95%, so that the dimension is reduced with little loss of information.
The PCA-WFCM algorithm builds on the FCM algorithm, a fuzzy clustering algorithm, i.e., a soft partitioning method: each sample point is not strictly assigned to one class but belongs to every class with a certain membership degree. Let u_ij be the membership degree of the j-th sample point to the i-th class; the membership matrix and the cluster centers are U = {u_ij} and V = {v_i}, respectively, with
0 ≤ u_ij ≤ 1, Σ_{i=1}^{K} u_ij = 1, j = 1, 2, …, N; (9)
The PCA-WFCM algorithm accounts for the importance of the attributes by giving different weights to the attributes after dimension reduction. The weight of an attribute is its variance contribution rate after PCA has reduced the dimension of the original data attributes; a larger contribution rate indicates that the attribute plays a larger role in the data set. The weight of the l-th attribute after dimension reduction is ω_l = λ_l / (Σ_{s=1}^{S} λ_s), l = 1, 2, …, L.
The goal of the clustering algorithm is to maximize intra-class similarity and minimize inter-class similarity, with similarity measured by Euclidean distance. The algorithm therefore determines the cluster centers V and the membership matrix U by minimizing the objective function
J = Σ_{i=1}^{K} Σ_{j=1}^{N} (u_ij)^m d_ij²; (8)
where d_ij is the weighted Euclidean distance from sample x_j to cluster center v_i:
d_ij = sqrt(Σ_{l=1}^{L} ω_l (x_jl − v_il)²); (10)
In formula (8), m ≥ 1 is the fuzzy weighting index describing the fuzziness of the membership matrix U; the larger m is, the fuzzier the partition (usually m = 2); L is the dimension of each sample point after dimension reduction and ω_l the cluster attribute weight.
Setting the derivatives of (8) to zero under constraint (9) yields the update formulas for u_ij and v_i:
u_ij = 1 / (Σ_{k=1}^{K} (d_ij / d_kj)^{2/(m−1)}); (11)
v_i = (Σ_{j=1}^{N} (u_ij)^m x_j) / (Σ_{j=1}^{N} (u_ij)^m); (12)
where x_j is the j-th sample.
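The weighted FCM iteration built from formulas (8)–(12) can be sketched as follows (an illustration; the random initialization of the membership matrix and the numerical floor on distances are assumptions, not part of the claimed method):

```python
import numpy as np

def wfcm(X, K, weights, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Weighted FCM: attribute-weighted distances (10), membership
    update (11), center update (12), stopping on the change of the
    objective (8)."""
    rng = np.random.default_rng(seed)
    N = len(X)
    U = rng.random((K, N))
    U /= U.sum(axis=0)                          # constraint (9)
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)       # centers (12)
        diff = X[None, :, :] - V[:, None, :]
        d2 = (weights * diff ** 2).sum(axis=2)             # weighted dist^2 (10)
        J = float((Um * d2).sum())                         # objective (8)
        if abs(J_prev - J) < eps:                          # stop criterion
            break
        J_prev = J
        d2 = np.maximum(d2, 1e-12)                         # avoid division by zero
        U = (1.0 / d2) ** (1.0 / (m - 1.0))                # membership (11)
        U /= U.sum(axis=0)
    return V, U
```

On two well-separated one-dimensional clusters the centers converge near the two group means, and each membership column sums to 1.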
Example two
According to the idea of the first embodiment, the neural network prediction in this embodiment needs to determine in advance the inputs and outputs of the neural network and the number of hidden-layer nodes. The network inputs are determined by a series of parameters that affect the predicted value. Because the load curve of smart grid users has good periodic characteristics, both the daily and the weekly periodicity can be considered to affect the load value at a given moment; that is, the load values at the same moment on the day before the prediction moment and at the same moment in the week before the prediction moment are selected. Specifically, the N_I input-layer neurons comprise: the load value one step before the prediction point, L(t−1); the load value two steps before, L(t−2); the load value at the same point on the previous day, L(t−48); the load value one step before that point, L(t−49); the load value one step after that point, L(t−47); the load value at the same point in the previous week, L(t−48×7); the load value one step before it, L(t−48×7−1); the load value one step after it, L(t−48×7+1); and a day-type parameter indicating whether the day is a weekend. The number of output-layer neurons N_III is 1: the load at a given moment is predicted. The number of hidden-layer neurons N_II is determined by minimizing the prediction error. All network inputs are processed by max-min normalization.
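The input construction and max-min normalization described above can be sketched as follows (an illustration assuming half-hourly data, 48 points per day; the function names, the array layout, and the weekend rule are assumptions):

```python
import numpy as np

def build_features(load, t, day_pts=48):
    """Input vector for one prediction point t: L(t-1), L(t-2),
    L(t-48), L(t-49), L(t-47), L(t-48*7), L(t-48*7-1), L(t-48*7+1)
    and a weekend flag (illustrative rule)."""
    week_pts = day_pts * 7
    lags = [1, 2, day_pts, day_pts + 1, day_pts - 1,
            week_pts, week_pts + 1, week_pts - 1]
    x = [load[t - k] for k in lags]
    is_weekend = ((t // day_pts) % 7) >= 5      # day index within the week
    x.append(float(is_weekend))
    return np.array(x)

def minmax_normalise(X):
    """Max-min normalization of each input column to [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)
```

A constant column is mapped to zero rather than dividing by zero; all other columns land in [0, 1].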
As shown in fig. 2, the smart grid short-term load prediction method based on the improved RBF neural network includes:
(1) Network initialization. Determine the number of input-layer neurons N_I, the number of hidden-layer neurons N_II, and the number of output-layer neurons N_III from the system input and output sequences, and initialize the learning rate η and the basis function overlap coefficient τ.
(2) Calculate the RBF basis function centers c_i. The basis function centers are determined with the PCA-WFCM clustering algorithm; the specific process is:
S21, input the fuzzy index m, the iteration stop threshold ε, the principal component cumulative contribution rate factor θ, the number of clusters, i.e., the number of basis function centers N_II, the cluster attribute weights ω_n, and the sample data X;
S22, perform PCA attribute dimension reduction on the sample data. Obtain the reduced dimension L as the smallest value whose cumulative contribution rate reaches θ, retain all dimensions up to L as cluster attributes, and compute each cluster attribute weight ω_n = λ_n / (Σ_{s=1}^{S} λ_s), n = 1, 2, …, L, obtaining the weight vector W = {ω_n} of the cluster attributes and the reduced-dimension sample data X_new. Here X_new = {x_1, x_2, …, x_g}, g is the number of sample points, and each sample point x_g = {x_g1, x_g2, …, x_gL}, where L is the number of retained attributes, i.e., the reduced dimension of each sample point.
S23, initializing a membership matrix U according to the formula (9);
S24, calculating a clustering center V according to the formula (12);
S25, calculating a membership matrix U according to the formula (11);
S26, calculating an objective function J according to the formula (8);
S27, if |J^(t) − J^(t−1)| < ε, obtain the cluster centers V, i.e., the basis function centers c_i; otherwise, return to step S24;
(3) Solve the variances ζ_i according to formula (2).
(4) Compute the output of the hidden layer according to formula (1), from the reduced-dimension sample data X_new, the hidden-layer centers c_i, and the variances ζ_i.
(5) The output of the output layer is calculated according to equation (3).
(6) The prediction error E is calculated according to equation (4).
(7) The connection weights are updated according to formula (5).
(8) Judge whether the iteration of the algorithm has finished; if not, return to step (4).
The method comprises the steps that a PCA-WFCM is adopted to determine the RBF basis function center, load data are subjected to PCA dimension reduction processing to obtain less irrelevant prediction input, so that the overlap of the RBF basis functions is reduced, the basis function center is better determined, and the complexity of a prediction algorithm is reduced; in addition, weighted FCM basis function clustering is adopted, and the variance contribution rates of different attributes obtained after PCA processing are weighted for the attributes after dimensionality reduction, so that clustering accuracy is improved, a more accurate basis function center is obtained, and load prediction accuracy is improved.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (7)
1. A short-term load prediction method of a smart grid based on an improved RBF neural network is characterized by comprising the following steps:
S1, initializing the network;
carrying out PCA dimensionality reduction on sample data of the smart grid, and carrying out weighted FCM (fuzzy C-means) basis function clustering on the sample data subjected to PCA dimensionality reduction, wherein, when carrying out the weighted FCM basis function clustering, the attributes subjected to dimensionality reduction are weighted by the variance contribution rates of the different attributes obtained in the PCA dimensionality reduction, so as to obtain the basis function centers ci; the method comprises the following steps:
S2, calculating the basis function centers ci;
S3, calculating the variances ζi according to the basis function centers ci;
The clustering attribute weights are calculated as follows:
the covariance matrix C is calculated by mapping the S-dimensional data space into an L-dimensional subspace, where L << S; let X = {xn}, n = 1, 2, …, N, be zero-mean data, i.e., Σn xn = 0, and let C = (1/N)·X^T·X, where the superscript T is the transpose symbol;
the S-dimensional data xi are projected onto the first L principal component directions, i.e., Y = X·QL;
let the eigenvalues of the covariance matrix C be λ1 ≥ λ2 ≥ … ≥ λS; λl/(λ1 + λ2 + … + λS) is the contribution rate of the l-th principal component, and (λ1 + … + λL)/(λ1 + … + λS) is the cumulative contribution rate of the first L principal components;
the set of eigenvectors is Q = [q1, q2, …, qS] and the eigenvalue matrix is Λ = diag(λ1, λ2, …, λS); L is chosen so that the cumulative contribution rate is greater than 95%;
according to the obtained basis function centers ci and variances ζi, RBF neural network prediction is performed on the sample data after the weighted FCM basis function clustering to obtain the output-layer output, specifically as follows:
S4, calculating the output Ri of the hidden layer according to the basis function centers ci and the variances ζi;
Step S4 includes:
according to the dimension-reduced sample data Xnew, the basis function centers ci and the variances ζi, the output of the hidden layer is obtained as Ri = exp(−||Xnew − ci||²/(2ζi²)), i = 1, 2, …, NⅡ;
in the formula, NⅡ represents the number of hidden layer neurons;
S5, calculating the output of the output layer according to the output Ri of the hidden layer, so as to obtain the short-term load prediction result of the smart grid;
S6, calculating the prediction error E according to the mean square error sum function;
S7, updating the connection weights between the hidden layer neurons and the output layer neurons in the neural network;
S8, judging the prediction error E: if the prediction error E is within the expected range, the iterative computation ends; otherwise, the process returns to step S4 and the prediction error E is calculated again.
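The PCA stage of claim 1 (dimension reduction before clustering) can be sketched as follows. This is a minimal illustration, assuming the reduced dimension L is taken as the smallest number of principal components whose cumulative variance contribution rate exceeds the factor theta (95% per the claims); the names `pca_reduce`, `theta` and `contrib` are illustrative, not from the patent.

```python
import numpy as np

def pca_reduce(X, theta=0.95):
    """Project data onto the first L principal components."""
    Xc = X - X.mean(axis=0)                      # zero-mean data
    C = Xc.T @ Xc / len(Xc)                      # covariance matrix
    lam, Q = np.linalg.eigh(C)                   # eigenvalues, ascending
    order = np.argsort(lam)[::-1]                # sort descending
    lam, Q = lam[order], Q[:, order]
    contrib = lam / lam.sum()                    # contribution rate of each PC
    # smallest L whose cumulative contribution rate reaches theta
    L = int(np.searchsorted(np.cumsum(contrib), theta) + 1)
    X_new = Xc @ Q[:, :L]                        # project onto first L PCs
    return X_new, contrib[:L], L
```

The retained contribution rates `contrib[:L]` are what the weighted FCM step uses as attribute weights.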
2. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S1 comprises:
determining the number of input layer neurons NⅠ, the number of hidden layer neurons NⅡ and the number of output layer neurons NⅢ, and initializing the learning rate η and the basis function overlap factor τ, wherein the number of hidden layer neurons NⅡ equals the number of basis function centers.
3. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S2 comprises:
S21, inputting the fuzzy index m, the iteration stop threshold ε, the PCA cumulative contribution rate factor θ, the number of basis function centers NⅡ, the clustering attribute weights ωn, and the original data X;
wherein the sample data X = {x1, x2, …, xN}, N is the number of sample points, and each sample point xj = {xj1, xj2, …, xjs}, j = 1, 2, …, N, where s represents the number of attributes contained in each sample point, i.e., the dimension of each sample point; in the PCA-WFCM method the sample data X are divided into K classes with clustering centers V = {v1, v2, …, vK}, 2 ≤ K ≤ N;
S22, performing PCA attribute dimension reduction on the sample data X, and calculating the dimension L of each sample point after dimension reduction as the smallest L satisfying (λ1 + … + λL)/(λ1 + … + λS) ≥ θ;
wherein S represents the original dimension of each sample point, λn is an eigenvalue representing the variance contribution in the PCA, and θ represents the PCA cumulative contribution rate factor;
the attribute weight of each clustering attribute is calculated as its PCA variance contribution rate, ωn = λn/(λ1 + λ2 + … + λL);
according to the clustering attribute weights ωn, the weight vector of the clustering attributes W = {ωn} and the dimension-reduced sample data Xnew are obtained;
S23, initializing the membership matrix U such that the membership degrees of each sample point sum to 1, i.e., Σi uij = 1;
in the formula, uij represents the degree of membership of the jth sample point to the ith class, and K represents the number of clustering centers V;
S24, calculating the clustering centers V according to the membership matrix U and the dimension-reduced sample data Xnew: vi = (Σj uij^m · xj)/(Σj uij^m), j = 1, 2, …, N;
wherein m is the fuzzy index and xj represents the jth sample point;
S25, iteratively calculating the membership matrix U according to the dimension-reduced sample data Xnew: uij = 1/Σk (dij/dkj)^(2/(m−1)), k = 1, 2, …, K;
wherein m is the fuzzy index and represents the degree of fuzziness of the membership matrix U (the larger the value of m, the fuzzier the membership matrix U; here m = 2), K represents the number of clustering centers V, and dij represents the weighted Euclidean distance from each sample point xj to the clustering center vi, calculated by the following formula:
dij = sqrt(Σn ωn·(xjn − vin)²), n = 1, 2, …, L, where L denotes the dimension of each sample point after dimension reduction and ωn represents the clustering attribute weight;
S26, calculating the objective function J according to the membership matrix U and the clustering centers V: J = Σi Σj uij^m · dij², i = 1, 2, …, K, j = 1, 2, …, N;
wherein m is the fuzzy index, dij represents the weighted Euclidean distance from each sample point xj to the clustering center vi, K represents the number of clustering centers V, and N represents the number of sample points;
S27, judging the objective function J: if |J(t) − J(t−1)| < ε, the clustering center V, i.e., the basis function centers ci, is output; otherwise, the process returns to step S24 until |J(t) − J(t−1)| < ε is satisfied and the iteration stops, wherein ε represents the iteration stop threshold and t represents the number of iterations.
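The weighted FCM of steps S23-S27 can be sketched as below, a minimal illustration with fuzzy index m = 2. The attribute weights `omega` stand for the PCA variance contribution rates; the function and variable names are assumptions, not from the patent.

```python
import numpy as np

def weighted_fcm(X_new, K, omega, m=2.0, eps=1e-6, max_iter=100, seed=0):
    """Alternate centers and memberships until |J(t) - J(t-1)| < eps."""
    rng = np.random.default_rng(seed)
    N = len(X_new)
    U = rng.random((K, N))
    U /= U.sum(axis=0)                           # S23: columns sum to 1
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X_new) / Um.sum(axis=1, keepdims=True)   # S24: centers
        # S25: weighted Euclidean distances d_ij
        diff = X_new[None, :, :] - V[:, None, :]
        d = np.sqrt((omega * diff ** 2).sum(axis=2)) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # S25: membership update
        U /= U.sum(axis=0)
        J = ((U ** m) * d ** 2).sum()            # S26: objective function
        if abs(J_prev - J) < eps:                # S27: stop test
            break
        J_prev = J
    return V, U
```

The returned centers V serve as the RBF basis function centers ci.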
4. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S3 comprises:
calculating the radius σi of the hidden layer neurons according to the following formula:
σi = τ·minj≠i ||ci − cj||, i, j = 1, 2, …, NⅡ;
in the formula, ci represents the basis function center, NⅡ represents the number of hidden layer neurons, and τ represents the basis function overlap factor.
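The radius formula of claim 4 can be sketched as follows, taking the minimum over j ≠ i (since the distance from a center to itself is zero); the names `basis_widths` and `tau` are illustrative.

```python
import numpy as np

def basis_widths(centers, tau=1.0):
    """sigma_i = tau * min over j != i of ||c_i - c_j||."""
    c = np.asarray(centers, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # exclude j == i
    return tau * d.min(axis=1)
```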
5. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S5 comprises:
according to the dimension-reduced sample data Xnew, the output of the output layer is obtained as the weighted sum y = Σi wi·Ri, i = 1, 2, …, NⅡ;
in the formula, NⅡ represents the number of hidden layer neurons, wi represents the connection weight from the ith hidden layer neuron to the output layer neuron, and Ri represents the output of the hidden layer.
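The forward pass of claim 5 can be sketched as follows for a single input, assuming the common Gaussian form for the basis function (the patent's equation images are not reproduced on this page); the names `rbf_forward`, `zeta` and `W` are illustrative.

```python
import numpy as np

def rbf_forward(x, centers, zeta, W):
    """Gaussian hidden outputs R_i, then the weighted sum y = sum_i w_i R_i."""
    R = np.exp(-np.linalg.norm(x - centers, axis=1) ** 2 / (2 * zeta ** 2))
    return float(W @ R), R
```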
6. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S6 comprises:
the prediction error E is calculated using the mean square error sum function: let the set of input vectors {xj}, j = 1, 2, …, O, and the corresponding output values yj, j = 1, 2, …, O, be the training samples; then E = (1/2)·Σj (yj − ŷj)², where ŷj denotes the network output for xj.
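The error of claim 6 can be sketched as follows; the 1/2 factor in the sum-of-squared-errors form is the conventional choice and an assumption here, since the patent's equation image is not shown.

```python
def prediction_error(y_true, y_pred):
    """E = 0.5 * sum over the O training samples of (y_j - y_hat_j)^2."""
    return 0.5 * sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
```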
7. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S7 comprises:
the connection weight W is updated by gradient descent according to the following formula: W(q+1) = W(q) − η·∂E/∂W;
where η denotes the learning rate, E denotes the prediction error, and q denotes the number of updates.
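The update of claim 7 can be sketched as follows, assuming a standard gradient-descent step: with E = 0.5·Σ(ŷ − y)² and ŷ = R·W, the gradient is Rᵀ(ŷ − y). The exact formula in the patent is an image, so this form and the names `update_weights`, `R` are assumptions.

```python
import numpy as np

def update_weights(W, R, y, eta):
    """One step W(q+1) = W(q) - eta * dE/dW for the output weights."""
    err = R @ W - y          # prediction residual y_hat - y
    return W - eta * (R.T @ err)
```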
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611148874.7A CN106600059B (en) | 2016-12-13 | 2016-12-13 | Intelligent power grid short-term load prediction method based on improved RBF neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106600059A CN106600059A (en) | 2017-04-26 |
CN106600059B true CN106600059B (en) | 2020-07-24 |
Family
ID=58802260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611148874.7A Active CN106600059B (en) | 2016-12-13 | 2016-12-13 | Intelligent power grid short-term load prediction method based on improved RBF neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106600059B (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107330518A (en) * | 2017-06-21 | 2017-11-07 | 国家电网公司 | Energy management control method and system based on temperature adjustment load prediction |
CN107403188A (en) * | 2017-06-28 | 2017-11-28 | 中国农业大学 | A kind of quality evaluation method and device |
CN107194524B (en) * | 2017-07-28 | 2020-05-22 | 合肥工业大学 | RBF neural network-based coal and gas outburst prediction method |
CN107657351A (en) * | 2017-10-26 | 2018-02-02 | 广州泰阳能源科技有限公司 | A kind of load prediction system based on PLC Yu pivot analysis RBF neural |
CN108021935B (en) * | 2017-11-27 | 2024-01-23 | 中国电力科学研究院有限公司 | Dimension reduction method and device based on big data technology |
CN108229754B (en) * | 2018-01-31 | 2021-12-10 | 杭州电子科技大学 | Short-term load prediction method based on similar day segmentation and LM-BP network |
CN108230121B (en) * | 2018-02-09 | 2022-06-10 | 艾凯克斯(嘉兴)信息科技有限公司 | Product design method based on recurrent neural network |
CN108680358A (en) * | 2018-03-23 | 2018-10-19 | 河海大学 | A kind of Wind turbines failure prediction method based on bearing temperature model |
CN108631817B (en) * | 2018-05-10 | 2020-05-19 | 东北大学 | Method for predicting frequency hopping signal frequency band based on time-frequency analysis and radial neural network |
CN108734355B (en) * | 2018-05-24 | 2022-03-08 | 国网福建省电力有限公司 | Short-term power load parallel prediction method and system applied to power quality comprehensive management scene |
CN109309382B (en) * | 2018-09-13 | 2022-03-04 | 广东工业大学 | Short-term power load prediction method |
CN109179133A (en) * | 2018-11-05 | 2019-01-11 | 常熟理工学院 | For prejudging the elevator intelligent maintenance prediction technique and system of failure |
CN109583044B (en) * | 2018-11-09 | 2022-07-15 | 中国直升机设计研究所 | Helicopter rotor flight load prediction method based on RBF neural network |
CN109284876A (en) * | 2018-11-19 | 2019-01-29 | 福州大学 | Based on PCA-RBF Buried Pipeline rate prediction method |
CN110097126B (en) * | 2019-05-07 | 2023-04-21 | 江苏优聚思信息技术有限公司 | Method for checking important personnel and house missing registration based on DBSCAN clustering algorithm |
CN110059824A (en) * | 2019-05-22 | 2019-07-26 | 杭州电子科技大学 | A kind of neural net prediction method based on principal component analysis |
CN110365647B (en) * | 2019-06-13 | 2021-09-14 | 广东工业大学 | False data injection attack detection method based on PCA and BP neural network |
CN110443318B (en) * | 2019-08-09 | 2023-12-08 | 武汉烽火普天信息技术有限公司 | Deep neural network method based on principal component analysis and cluster analysis |
CN110796158A (en) * | 2019-09-10 | 2020-02-14 | 国网浙江省电力有限公司杭州供电公司 | Power grid company classification method based on RBF radial basis function neural network |
CN110824142B (en) * | 2019-11-13 | 2022-06-24 | 杭州鲁尔物联科技有限公司 | Geological disaster prediction method, device and equipment |
CN110852522B (en) * | 2019-11-19 | 2024-03-29 | 南京工程学院 | Short-term power load prediction method and system |
JP7295431B2 (en) * | 2019-11-27 | 2023-06-21 | 富士通株式会社 | Learning program, learning method and learning device |
CN110991638B (en) * | 2019-11-29 | 2024-01-05 | 国网山东省电力公司聊城供电公司 | Generalized load modeling method based on clustering and neural network |
CN111259943A (en) * | 2020-01-10 | 2020-06-09 | 天津大学 | Thermocline prediction method based on machine learning |
CN112580853A (en) * | 2020-11-20 | 2021-03-30 | 国网浙江省电力有限公司台州供电公司 | Bus short-term load prediction method based on radial basis function neural network |
CN112766076B (en) * | 2020-12-31 | 2023-05-12 | 上海电机学院 | Ultra-short-term prediction method, system, equipment and storage medium for power load |
CN113408699A (en) * | 2021-06-16 | 2021-09-17 | 中国地质科学院 | Lithology identification method and system based on improved radial basis function neural network |
CN114875196B (en) * | 2022-07-01 | 2022-09-30 | 北京科技大学 | Method and system for determining converter tapping quantity |
CN115796057A (en) * | 2023-02-06 | 2023-03-14 | 广东电网有限责任公司中山供电局 | Cable joint temperature prediction method and system based on BAS-LSTM |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982393A (en) * | 2012-11-09 | 2013-03-20 | 山东电力集团公司聊城供电公司 | Online prediction method of electric transmission line dynamic capacity |
CN103095494A (en) * | 2012-12-31 | 2013-05-08 | 北京邮电大学 | Risk evaluation method of electric power communication network |
CN103136598A (en) * | 2013-02-26 | 2013-06-05 | 福建省电力有限公司 | Monthly electrical load computer forecasting method based on wavelet analysis |
CN103646354A (en) * | 2013-11-28 | 2014-03-19 | 国家电网公司 | Effective index FCM and RBF neural network-based substation load characteristic categorization method |
CN105678404A (en) * | 2015-12-30 | 2016-06-15 | 东北大学 | Micro-grid load prediction system and method based on electricity purchased on-line and dynamic correlation factor |
CN105787584A (en) * | 2016-01-28 | 2016-07-20 | 华北电力大学(保定) | Wind turbine malfunction early warning method based on cloud platform |
EP3098762A1 (en) * | 2015-05-29 | 2016-11-30 | Samsung Electronics Co., Ltd. | Data-optimized neural network traversal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106600059B (en) | Intelligent power grid short-term load prediction method based on improved RBF neural network | |
CN109063911B (en) | Load aggregation grouping prediction method based on gated cycle unit network | |
CN110175386B (en) | Method for predicting temperature of electrical equipment of transformer substation | |
CN111027772B (en) | Multi-factor short-term load prediction method based on PCA-DBILSTM | |
CN108985515B (en) | New energy output prediction method and system based on independent cyclic neural network | |
CN106067034B (en) | Power distribution network load curve clustering method based on high-dimensional matrix characteristic root | |
Li et al. | A novel double incremental learning algorithm for time series prediction | |
CN109492748B (en) | Method for establishing medium-and-long-term load prediction model of power system based on convolutional neural network | |
Han et al. | An improved fuzzy neural network based on T–S model | |
CN109255726A (en) | A kind of ultra-short term wind power prediction method of Hybrid Intelligent Technology | |
CN113641722A (en) | Long-term time series data prediction method based on variant LSTM | |
CN112434848A (en) | Nonlinear weighted combination wind power prediction method based on deep belief network | |
CN110895772A (en) | Electricity sales amount prediction method based on combination of grey correlation analysis and SA-PSO-Elman algorithm | |
CN111355633A (en) | Mobile phone internet traffic prediction method in competition venue based on PSO-DELM algorithm | |
CN112766603A (en) | Traffic flow prediction method, system, computer device and storage medium | |
CN115186803A (en) | Data center computing power load demand combination prediction method and system considering PUE | |
CN110738363B (en) | Photovoltaic power generation power prediction method | |
CN116526450A (en) | Error compensation-based two-stage short-term power load combination prediction method | |
CN113836823A (en) | Load combination prediction method based on load decomposition and optimized bidirectional long-short term memory network | |
CN110471768A (en) | A kind of load predicting method based on fastPCA-ARIMA | |
CN113722980A (en) | Ocean wave height prediction method, system, computer equipment, storage medium and terminal | |
Han et al. | A Hybrid BPNN-GARF-SVR PredictionModel Based on EEMD for Ship Motion. | |
Bao et al. | Restricted Boltzmann Machine‐Assisted Estimation of Distribution Algorithm for Complex Problems | |
CN111079902A (en) | Decomposition fuzzy system optimization method and device based on neural network | |
CN114330119B (en) | Deep learning-based extraction and storage unit adjusting system identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||