CN106600059B - Intelligent power grid short-term load prediction method based on improved RBF neural network - Google Patents


Info

Publication number: CN106600059B
Authority: CN (China)
Prior art keywords: center; output; neural network; hidden layer; calculating
Legal status: Active (the status listed is an assumption by Google, not a legal conclusion)
Application number: CN201611148874.7A
Other languages: Chinese (zh)
Other versions: CN106600059A
Inventors: 张天魁, 鲁云, 肖霖, 杨鼎成
Current and original assignees: Nanchang University; Beijing University of Posts and Telecommunications
Application filed by Nanchang University and Beijing University of Posts and Telecommunications
Priority to CN201611148874.7A
Publication of CN106600059A
Application granted; publication of CN106600059B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING; COUNTING
    • G06Q — DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 — Administration; Management
    • G06Q10/04 — Forecasting or optimisation, e.g. linear programming, "travelling salesman problem" or "cutting stock problem"
    • G06N — COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computer systems based on biological models
    • G06N3/02 — Computer systems based on biological models using neural network models
    • G06N3/04 — Architectures, e.g. interconnection topology
    • G06N3/0481 — Non-linear activation functions, e.g. sigmoids, thresholds
    • G06N3/08 — Learning methods
    • G06Q50/00 — Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 — Electricity, gas or water supply

Abstract

The invention discloses a smart grid short-term load prediction method based on an improved RBF neural network, relates to the technical field of smart grids, and is used for determining the basis-function centers and improving smart grid load-prediction accuracy. The prediction method comprises the following steps: S1, initializing the network; S2, calculating the basis-function centers c_i; S3, calculating the variances ζ_i from the centers c_i; S4, computing the hidden-layer outputs R_i from the centers c_i and variances ζ_i; S5, calculating the output-layer output from the hidden-layer outputs R_i; S6, calculating the prediction error E with the mean-square-error sum function; S7, updating the connection weights between the hidden-layer and output-layer neurons; S8, checking the prediction error E: if E is within the expected range, ending the iterative computation; otherwise returning to step S4 and iteratively recalculating E. The method and device are used for power grid load prediction.

Description

Intelligent power grid short-term load prediction method based on improved RBF neural network
Technical Field
The invention relates to the technical field of smart grids, in particular to a smart grid short-term load prediction method based on an improved RBF neural network.
Background
The rapid development of the smart grid generates large volumes of electricity-consumption data (also called sample data), and analyzing these data is of great significance. Applying a prediction method to the sample data for short-term load forecasting improves load-prediction accuracy and plays an important role in the safe dispatch and economic operation of the power system. The radial basis function (RBF) neural network is the most widely used prediction method in load forecasting: it is a local approximation network that can approximate any continuous function to any precision, possesses the unique best-approximation property, avoids the local-minimum problem, and has a simple topology and a fast learning rate. Three parameters of the RBF prediction method mainly affect prediction accuracy: the basis-function centers, the basis-function radii, and the connection weights between the hidden layer and the output layer. The connection weights are usually obtained by gradient descent. Because the basis-function centers and radii strongly influence prediction accuracy, existing research focuses on how to determine them for the RBF neural network. The prior art mainly computes the basis-function centers and radii as follows:
The first approach uses clustering methods (e.g., the K-means and FCM methods) to compute the basis-function centers and radii; the second uses heuristic methods (e.g., genetic algorithms and particle swarm optimization). Under the large-scale load data of a smart grid, heuristic methods are computationally complex and slow to predict, so clustering methods are better suited to determining the RBF basis-function centers and radii for large-scale smart grid load prediction.
In addition, the RBF basis-function centers are mainly determined with the FCM method; however, in smart grid load prediction the load data are large in scale and high in dimensionality, which makes the FCM method complex and ultimately lowers smart grid load-prediction accuracy.
Disclosure of Invention
The invention aims to provide a smart grid short-term load prediction method based on an improved RBF neural network, which is used for determining a basis function center and improving the smart grid load prediction precision.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a smart grid short-term load prediction method based on an improved RBF neural network, which comprises the following steps:
S1, initializing the network;
S2, calculating the basis-function centers c_i;
S3, calculating the variances ζ_i from the basis-function centers c_i;
S4, computing the hidden-layer outputs R_i from the centers c_i and variances ζ_i;
S5, calculating the output-layer output from the hidden-layer outputs R_i;
S6, calculating the prediction error E with the mean-square-error sum function;
S7, updating the connection weights between the hidden-layer neurons and the output-layer neurons in the neural network;
S8, checking the prediction error E: if E is within the expected range, ending the iterative computation; otherwise, returning to step S4 and iteratively recalculating E.
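Steps S4–S8 above can be sketched end to end as follows. This is a minimal NumPy sketch under stated assumptions: the centers and widths from steps S2–S3 are passed in (the PCA-WFCM center selection is assumed done separately), a Gaussian basis function is used, and `train_rbf` and its defaults are illustrative names, not the patent's implementation.

```python
import numpy as np

def train_rbf(X, y, centers, widths, lr=0.01, tol=1e-4, max_iter=5000):
    """Steps S4-S8: iterate hidden-layer output, output-layer output,
    prediction error, and weight update until the error is acceptable."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=len(centers))   # small random weights
    E = np.inf
    for _ in range(max_iter):
        # S4: hidden-layer outputs R_i = exp(-||x - c_i||^2 / (2 zeta_i^2))
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        R = np.exp(-d2 / (2.0 * widths ** 2))
        y_hat = R @ W                              # S5: linear output layer
        E = 0.5 * np.sum((y - y_hat) ** 2)         # S6: error-sum function
        if E < tol:                                # S8: stop criterion
            break
        W -= lr * (R.T @ (y_hat - y))              # S7: gradient descent
    return W, E
```

Because the output layer is linear in W, the loop is an ordinary least-squares fit solved by gradient descent, matching the patent's choice of training only the output weights.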
Step S1 includes: determining the number of input-layer neurons N_in, the number of hidden-layer neurons N_h, and the number of output-layer neurons N_out, and initializing the learning rate η and the basis-function overlap coefficient τ; the number of hidden-layer neurons N_h equals the number of basis-function centers.
Step S2 includes:
S21, inputting the fuzzy index m, the iteration-stop threshold ε, the PCA cumulative-contribution-rate factor, the number of basis-function centers N_h, the clustering attribute weights ω_n, and the original sample data X; here X = {x_1, x_2, …, x_N}, N is the number of sample points, each sample point x_j = {x_j1, x_j2, …, x_js}, j = 1, 2, …, N, and s is the number of attributes of each sample point, i.e. its dimensionality. In the PCA-WFCM method the sample data X are divided into K classes with cluster centers V = {v_1, v_2, …, v_K}, 2 ≤ K ≤ N.
S22, performing PCA attribute dimensionality reduction on the sample data X: the reduced dimensionality L of each sample point is the smallest L satisfying $\sum_{n=1}^{L}\lambda_n \big/ \sum_{n=1}^{S}\lambda_n \ge \theta$, where S is the original dimensionality of each sample point, λ_n is an eigenvalue of the covariance matrix in PCA, and θ is the PCA cumulative-contribution factor; all dimensions up to L are retained as clustering attributes. Each clustering attribute weight is then computed as $\omega_n = \lambda_n \big/ \sum_{l=1}^{L}\lambda_l$, n = 1, 2, …, L, giving the clustering-attribute weight vector W = {ω_n} and the reduced sample data X_new.
S23, initializing the membership matrix U = {u_ij} with 0 ≤ u_ij ≤ 1 and $\sum_{i=1}^{K} u_{ij} = 1$, where u_ij is the degree of membership of the j-th sample point in the i-th class and K is the number of cluster centers V.
S24, calculating the cluster centers V = {v_i} from the membership matrix U and the reduced sample data X_new: $v_i = \sum_{j=1}^{N} u_{ij}^{m} x_j \big/ \sum_{j=1}^{N} u_{ij}^{m}$, where m is the fuzzy index and x_j is the j-th sample point.
S25, iteratively calculating the membership matrix U = {u_ij} from the reduced sample data X_new: $u_{ij} = 1 \big/ \sum_{k=1}^{K} (d_{ij}/d_{kj})^{2/(m-1)}$, where m is the fuzzy index describing the fuzziness of the membership matrix U (the larger m, the fuzzier U; here m = 2), K is the number of cluster centers V, and d_ij is the weighted Euclidean distance from sample point x_j to cluster center v_i, computed as $d_{ij} = \sqrt{\sum_{n=1}^{L} \omega_n (x_{jn} - v_{in})^2}$, where L is the reduced dimensionality of each sample point and ω_n is the clustering attribute weight.
S26, calculating the objective function J from the membership matrix U and the cluster centers V: $J = \sum_{i=1}^{K}\sum_{j=1}^{N} u_{ij}^{m} d_{ij}^{2}$, where m is the fuzzy index, d_ij is the weighted Euclidean distance from x_j to v_i, K is the number of cluster centers, and N is the number of sample points.
S27, judging the objective function J: if |J^(t) − J^(t−1)| < ε, output the cluster centers V as the basis-function centers c_i; otherwise return to step S24 until |J^(t) − J^(t−1)| < ε is satisfied, then stop the iterative calculation and output the cluster centers V. Here ε is the iteration-stop threshold and t is the iteration count.
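The clustering loop S23–S27 can be sketched as follows, assuming the attribute weights ω_n from the PCA step are supplied. The function name `pca_wfcm` and the default parameters are illustrative, not the patent's implementation.

```python
import numpy as np

def pca_wfcm(X, K, w, m=2.0, eps=1e-6, max_iter=200, seed=0):
    """Weighted FCM loop of steps S23-S27: alternate the centre update
    (S24) and the membership update with weighted Euclidean distances
    (S25) until the objective J changes by less than eps (S27)."""
    rng = np.random.default_rng(seed)
    U = rng.random((K, X.shape[0]))
    U /= U.sum(axis=0)                       # S23: memberships sum to 1
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)       # S24: centres
        diff = X[None, :, :] - V[:, None, :]
        d = np.sqrt((w * diff ** 2).sum(axis=2)) + 1e-12   # weighted d_ij
        U = d ** (-2.0 / (m - 1.0))                        # S25: u_ij
        U /= U.sum(axis=0)
        J = float(np.sum((U ** m) * d ** 2))               # S26: objective
        if abs(J_prev - J) < eps:                          # S27: stop test
            return V, U
        J_prev = J
    return V, U
```

The returned cluster centers V serve directly as the basis-function centers c_i in the subsequent RBF steps.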
Step S3 includes: calculating the radii of the hidden-layer neurons by $\zeta_i = \tau \min_{j \ne i} \|c_i - c_j\|$, i, j = 1, 2, …, N_h, where c_i is a basis-function center, N_h is the number of hidden-layer neurons, and τ is the basis-function overlap coefficient.
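The radius rule of step S3 is short enough to show directly; `rbf_widths` is an illustrative name for a sketch of this formula, not the patent's code.

```python
import numpy as np

def rbf_widths(centers, tau):
    """zeta_i = tau * min_{j != i} ||c_i - c_j||: each radius is the
    overlap coefficient times the distance to the nearest other centre."""
    C = np.asarray(centers, dtype=float)
    d = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude the j == i term
    return tau * d.min(axis=1)
```

Larger τ widens the Gaussians and increases basis-function overlap, which is exactly what the PCA step is meant to counteract.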
Step S4 includes: obtaining the hidden-layer outputs from the reduced sample data X_new, the basis-function centers c_i, and the variances ζ_i: $R_i = \exp\left(-\|X_{new} - c_i\|^2 / (2\zeta_i^2)\right)$, i = 1, 2, …, N_h, where N_h is the number of hidden-layer neurons.
Step S5 includes: obtaining the output-layer output from the reduced sample data X_new: $y = \sum_{i=1}^{N_h} w_i R_i$, where N_h is the number of hidden-layer neurons, w_i is the connection weight from the i-th hidden-layer neuron to the output-layer neuron, and R_i is the hidden-layer output.
Step S6 includes: calculating the prediction error E with the mean-square-error sum function. Taking a set of input vectors {x_j, j = 1, 2, …, O} and corresponding output values {y_j, j = 1, 2, …, O} as training samples, where O is the number of samples, the prediction error is $E = \frac{1}{2}\sum_{j=1}^{O}\left(y_j - \hat{y}_j\right)^2$, where $\hat{y}_j$ is the network output.
Step S7 includes: updating the connection weights W by gradient descent: $W(q+1) = W(q) - \eta\,\partial E/\partial W$, where η is the learning rate, E is the prediction error, and q is the update count.
The clustering weights are calculated as follows: PCA maps S-dimensional data into an L-dimensional subspace with L ≪ S. Let X = {x_n}, n = 1, 2, …, N, be zero-mean data, i.e. $\sum_{n=1}^{N} x_n = 0$; the covariance matrix is $C = \frac{1}{N}\sum_{n=1}^{N} x_n x_n^{T}$, where T denotes transposition. Eigenvalue decomposition of C gives $C = Q\Lambda Q^{T}$, with eigenvector set Q = [q_1, q_2, …, q_S] and eigenvalues Λ = diag(λ_1, λ_2, …, λ_S). An S-dimensional data point x_i is projected onto the L principal-component directions as Y = XQ_L. Let the eigenvalues of C satisfy λ_1 ≥ λ_2 ≥ … ≥ λ_S; then $\lambda_l / \sum_{s=1}^{S}\lambda_s$ is the contribution rate of the l-th principal component and $\sum_{l=1}^{L}\lambda_l / \sum_{s=1}^{S}\lambda_s$ is the cumulative contribution rate of the first L components, which is made greater than 95%. The clustering attribute weight of the l-th attribute after dimensionality reduction is $\omega_l = \lambda_l / \sum_{n=1}^{L}\lambda_n$, l = 1, 2, …, L.
The method determines the RBF basis-function centers with PCA-WFCM: PCA dimensionality reduction of the load data yields fewer, uncorrelated prediction inputs, which reduces the overlap of the RBF basis functions, determines the centers better, and lowers the complexity of the prediction algorithm. In addition, weighted FCM basis-function clustering weights the reduced attributes by the variance contribution rates obtained in the PCA step, which improves clustering accuracy, yields more accurate basis-function centers, and raises load-prediction accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a basic structure of an RBF neural network according to an embodiment of the present invention;
fig. 2 is a flowchart of a short-term load prediction method of a smart grid based on an improved RBF neural network in the embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to a weighted FCM clustering algorithm based on PCA dimensionality reduction, referred to collectively as PCA-WFCM, used to obtain the smart grid short-term load prediction result.
In addition, the iterative calculation involved in the invention means successive approximation: a rough initial value is taken first and then repeatedly corrected with the same formula (or several formulas) until a preset precision requirement is met. For example, the iterations in the invention build the trained model against the prediction error E and involve the hidden-layer output, the output-layer output, the weight updates, and so on.
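Successive approximation as described above can be illustrated with a generic fixed-point iteration; the helper name and the square-root example are illustrative only and are not part of the patented method.

```python
def successive_approximation(f, x0, tol=1e-10, max_iter=100):
    """Take a rough initial value and repeatedly correct it with the
    same formula until the change falls below the preset precision."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Heron's formula x -> (x + a/x)/2 successively approximates sqrt(a)
root = successive_approximation(lambda x: (x + 2.0 / x) / 2.0, 1.0)
```

The prediction method's loop over steps S4–S8 has the same shape: the correction formula is the weight update, and the stopping precision is the expected range of the error E.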
In the invention, the hidden-layer centers c_i are also called the basis-function centers c_i, and the radii of the hidden-layer neurons ζ_i are also called the variances ζ_i.
Example one
The embodiment provides a smart grid short-term load prediction method based on an improved RBF neural network. The method determines the RBF basis-function centers c_i with the PCA-WFCM clustering algorithm and determines the connection weights between the hidden layer and the output layer of the RBF neural network by gradient descent. The method is described in detail as follows:
First, establish the sample data X = {x_1, x_2, …, x_N}, with N sample points in total; each sample point x_j = {x_j1, x_j2, …, x_js}, j = 1, 2, …, N, contains s attributes. In the cluster analysis the sample data X are divided into K classes, 2 ≤ K ≤ N, with K cluster centers V = {v_1, v_2, …, v_K}.
The radial basis function (RBF) neural network is a feedforward neural network based on function-approximation theory; compared with a BP (back-propagation) neural network it offers good function-approximation properties, a simple structure, and fast training. As shown in Fig. 1, the RBF neural network is a three-layer network composed of an input layer 1, a hidden layer 2, and an output layer 3.
The core idea of the RBF neural network is to use radial basis functions as the basis of the hidden-layer units to form the hidden-layer space: the input vector is transformed in the hidden layer so that data that are linearly inseparable in the low-dimensional space become linearly separable in the high-dimensional space. The transformation from the input layer to the hidden layer is nonlinear, while that from the hidden layer to the output layer is linear. The hidden-layer transfer function is a radial basis function, a non-negative nonlinear function that is radially symmetric about a local distribution center. The connection weights between the input layer and the hidden layer of the RBF network are 1; the hidden layer performs the parameter adjustment of the activation function, and the output layer adjusts the connection weights. Three parameters must be solved in the RBF network: the basis-function centers c_i, the widths of the hidden layer, and the connection weights from the hidden layer to the output layer. The Gaussian function is the commonly used basis function in RBF networks, so the output of a hidden-layer neuron is:
$R_i = \exp\left(-\|x - c_i\|^2 / (2\zeta_i^2)\right), \quad i = 1, 2, \dots, N_h; \qquad (1)$
where c_i is the hidden-layer center, i.e. the center of the i-th Gaussian function, N_h is the number of hidden-layer neurons, and ζ_i is the radius of the i-th RBF hidden-layer neuron, which can be expressed as:
$\zeta_i = \tau \min_{j \ne i} \|c_i - c_j\|, \quad i, j = 1, 2, \dots, N_h; \qquad (2)$
where c_i is the hidden-layer center, N_h is the number of hidden-layer neurons, and τ is the basis-function overlap coefficient.
The output of the RBF neural network is a linear combination of the outputs of all hidden-layer neurons:
$y = \sum_{i=1}^{N_h} w_i R_i; \qquad (3)$
where w_i is the connection weight from the i-th hidden-layer neuron to the output neuron and R_i is the hidden-layer output.
In the nonlinear approximation process of the RBF neural network, once the training samples are given, the algorithm must solve two key problems: 1) determining the network structure, i.e. the basis-function centers c_i of the RBF network; 2) adjusting the connection weights w between the hidden layer and the output layer. The choice of these parameters affects the prediction performance of the RBF network, so before prediction the optimal w and c_i must be selected to improve the prediction performance of the RBF neural network.
The connection weights w between the hidden layer and the output layer are generally trained by gradient descent. Take a set of input vectors {x_j, j = 1, 2, …, O} and corresponding output values {y_j, j = 1, 2, …, O} as training samples, where O is the number of samples. The mean-square-error sum function is then:
$E = \frac{1}{2}\sum_{j=1}^{O}\left(y_j - \hat{y}_j\right)^2; \qquad (4)$
where $\hat{y}_j$ is the network output. To minimize the error function, the connection weights W are updated as:
$W(q+1) = W(q) - \eta\,\frac{\partial E}{\partial W}; \qquad (5)$
where η is the learning rate and q is the update count.
center of basis function ciAnd determining by adopting a PCA-WFCM clustering method. The PCA-WFCM clustering algorithm is described in detail below.
The PCA-WFCM clustering algorithm builds on the traditional FCM clustering algorithm: it reduces algorithm complexity by reducing the attribute dimensionality of the sample data, and further improves clustering performance by performing weighted FCM clustering with the variance contribution rate of each reduced attribute as its weight. The algorithm has two steps, PCA dimensionality reduction followed by weighted FCM clustering; each is introduced below.
Principal component analysis (PCA) is a linear dimensionality-reduction algorithm that maps S-dimensional data into an L-dimensional subspace, L ≪ S. It requires the eigenvectors of the covariance matrix of the original data. Let X = {x_n}, n = 1, 2, …, N, be zero-mean data, i.e. $\sum_{n=1}^{N} x_n = 0$. The covariance matrix C is defined as:
$C = \frac{1}{N}\sum_{n=1}^{N} x_n x_n^{T}; \qquad (6)$
Eigenvalue decomposition gives:
$C = Q\Lambda Q^{T}; \qquad (7)$
where Q = [q_1, q_2, …, q_S] is the set of eigenvectors and Λ = diag(λ_1, λ_2, …, λ_S) the eigenvalues. Using the first L eigenvectors Q_L = [q_1, q_2, …, q_L], an S-dimensional data point x_i is projected onto the L-dimensional principal-component directions as Y = XQ_L. Let the eigenvalues of C satisfy λ_1 ≥ λ_2 ≥ … ≥ λ_S > 0, and define $\lambda_l / \sum_{s=1}^{S}\lambda_s$ as the contribution rate of the l-th principal component and $\sum_{l=1}^{L}\lambda_l / \sum_{s=1}^{S}\lambda_s$ as the cumulative contribution rate of the first L components; the cumulative contribution rate is generally required to exceed 95% so that dimensionality is reduced with little information loss.
The PCA-WFCM algorithm is based on the FCM algorithm, a fuzzy clustering (soft-partitioning) method: each sample point is not strictly assigned to one class but belongs to each class with some degree of membership. Let u_ij denote the membership of the j-th sample point in the i-th class; the membership matrix and the cluster centers are U = {u_ij} and V = {v_i}, respectively. On top of FCM, PCA-WFCM accounts for the importance of the attributes by assigning different weights to the reduced attributes. The attribute weights are the variance contribution rates of the attributes after PCA reduces the original attributes: an attribute with a larger contribution rate plays a larger role in the data set. The weight of the l-th attribute after reduction is $\omega_l = \lambda_l / \sum_{n=1}^{L}\lambda_n$, l = 1, 2, …, L.
The goal of the clustering algorithm is to maximize intra-class similarity and minimize inter-class similarity, with similarity measured by Euclidean distance. The algorithm therefore determines the cluster centers V and the fuzzy membership matrix U by minimizing the objective function
$J(U, V) = \sum_{i=1}^{K}\sum_{j=1}^{N} u_{ij}^{m} d_{ij}^{2}; \qquad (8)$
subject to
$\sum_{i=1}^{K} u_{ij} = 1, \quad j = 1, 2, \dots, N; \qquad (9)$
where d_ij is the weighted Euclidean distance from sample x_j to cluster center v_i:
$d_{ij} = \sqrt{\sum_{l=1}^{L} \omega_l (x_{jl} - v_{il})^2}. \qquad (10)$
In formula (8), m > 1 is the fuzzy weighting index describing the fuzziness of the membership matrix U: the larger m, the fuzzier the classification; usually m = 2. L is the dimensionality of each sample point after reduction, and the ω_l are the clustering weights.
Setting the derivatives of (8) under constraint (9) to zero yields the update formulas for u_ij and v_i:
$u_{ij} = \left[\sum_{k=1}^{K}\left(\frac{d_{ij}}{d_{kj}}\right)^{\frac{2}{m-1}}\right]^{-1}; \qquad (11)$
$v_i = \frac{\sum_{j=1}^{N} u_{ij}^{m} x_j}{\sum_{j=1}^{N} u_{ij}^{m}}; \qquad (12)$
where x_j is the j-th sample.
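Because each sweep of (12) minimizes J over the centers and each sweep of (11) minimizes it over the memberships, the objective (8) can only decrease from sweep to sweep. A short numeric check of this property, with illustrative data and assumed attribute weights:

```python
import numpy as np

# Alternate updates (12) and (11) on two synthetic clusters and record
# the objective (8) after each sweep; it must be non-increasing.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(4.0, 0.3, (10, 2))])
w = np.array([0.6, 0.4])            # assumed attribute weights from PCA
K, m = 2, 2.0
U = rng.random((K, 20))
U /= U.sum(axis=0)                  # constraint (9)
J_hist = []
for _ in range(15):
    Um = U ** m
    V = (Um @ X) / Um.sum(axis=1, keepdims=True)             # formula (12)
    d = np.sqrt((w * (X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)) + 1e-12
    U = d ** (-2.0 / (m - 1.0))
    U /= U.sum(axis=0)                                       # formula (11)
    J_hist.append(float(np.sum((U ** m) * d ** 2)))          # objective (8)
```

The monotone decrease of J is what guarantees that the stopping test |J^(t) − J^(t−1)| < ε of step S27 is eventually met.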
Example two
Following the idea of the first embodiment, the neural-network prediction in this embodiment requires determining the network inputs and outputs and the number of hidden-layer nodes in advance. The network inputs are determined by a series of parameters that affect the predicted value. Because the load curve of smart grid users has good periodicity, the daily and weekly periodic characteristics can be taken into account for the load value at a given time: the load values at the same time on the day before the prediction time and at the same time in the week before are selected. Specifically, the N_in input-layer neurons comprise (with 48 sampling points per day): the load one step before the prediction point, L(t−1); the load two steps before, L(t−2); the load at the same time on the previous day, L(t−48); the load one step before that, L(t−49); the load one step after that, L(t−47); the load at the same time in the previous week, L(t−48×7); the load one step before that, L(t−48×7−1); the load one step after that, L(t−48×7+1); and a day-type parameter indicating whether the day is a weekend (a weekend is denoted 1). The number of hidden-layer neurons N_h is chosen to minimize the prediction error. All network inputs are processed with max-min normalization.
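The input construction above can be sketched as follows. The `build_input` and `max_min_normalize` names are illustrative, and the weekend-flag derivation from the sample index is an assumption for demonstration (the source only states that a weekend is denoted 1):

```python
import numpy as np

def build_input(load, t):
    """Input vector for predicting the load at sample index t, assuming
    48 half-hour samples per day; the weekend flag is derived from the
    index here purely for illustration (assumed: 1 = weekend)."""
    day, week = 48, 48 * 7
    lags = [t - 1, t - 2, t - day, t - day - 1, t - day + 1,
            t - week, t - week - 1, t - week + 1]
    weekend = 1.0 if (t // day) % 7 >= 5 else 0.0   # assumed day-type rule
    return np.array([load[i] for i in lags] + [weekend])

def max_min_normalize(x):
    """Max-min normalisation of the network inputs to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

Each prediction point thus contributes a 9-dimensional input vector (eight lagged loads plus the day-type flag) before PCA dimensionality reduction.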
As shown in fig. 2, the smart grid short-term load prediction method based on the improved RBF neural network includes:
(1) Network initialization: determine the number of input-layer neurons N_in, hidden-layer neurons N_h, and output-layer neurons N_out from the system input and output sequences, and initialize the learning rate η and the basis-function overlap coefficient τ.
(2) Calculate the RBF basis-function centers c_i. The centers are determined with the PCA-WFCM clustering algorithm; the specific process is:
S21, inputting the fuzzy index m, the iteration-stop threshold ε, the principal-component cumulative-contribution-rate factor, the number of clusters (i.e. the number of basis-function centers N_h), the attribute weights ω_n, and the sample data X;
S22, performing PCA attribute dimensionality reduction on the sample data: obtain the reduced dimensionality L as the smallest L whose cumulative contribution rate reaches the given factor, retain all dimensions up to L as clustering attributes, initialize each clustering attribute weight as $\omega_n = \lambda_n / \sum_{l=1}^{L}\lambda_l$, n = 1, 2, …, L, and obtain the attribute-weight vector W and the reduced sample data X_new, where X_new = {x_1, x_2, …, x_N} and each reduced sample point retains the first L attributes;
S23, initializing the membership matrix U subject to formula (9);
S24, calculating the cluster centers V with formula (12);
S25, calculating the membership matrix U with formula (11);
S26, calculating the objective function J with formula (8);
S27, if |J^(t) − J^(t−1)| < ε, taking the cluster centers V as the basis-function centers c_i; otherwise returning to step S24;
(3) Solve the variances ζ_i with equation (2).
(4) Compute the output of the hidden layer from the reduced sample data X_new, the hidden-layer centers c_i, and the variances ζ_i, using equation (1).
(5) The output of the output layer is calculated according to equation (3).
(6) The prediction error E is calculated according to equation (4).
(7) Update the connection weights according to equation (5).
(8) And (4) judging whether the iteration of the algorithm is finished or not, and if not, returning to the step (4).
In summary, the method determines the RBF basis-function centers with PCA-WFCM: PCA dimensionality reduction of the load data yields fewer, uncorrelated prediction inputs, which reduces the overlap of the RBF basis functions, determines the centers better, and lowers the complexity of the prediction algorithm. In addition, weighted FCM basis-function clustering weights the reduced attributes by the variance contribution rates obtained in the PCA step, which improves clustering accuracy, yields more accurate basis-function centers, and raises load-prediction accuracy.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (7)

1. A short-term load prediction method of a smart grid based on an improved RBF neural network is characterized by comprising the following steps:
s1, initializing the network;
performing PCA dimensionality reduction on sample data of the smart grid; performing weighted FCM basis-function clustering on the PCA-reduced sample data, wherein during the weighted FCM clustering the reduced clustering attributes are weighted by the variance contribution rates of the different attributes obtained in the PCA dimensionality reduction, to obtain the basis-function centers c_i; specifically:
S2, calculating the basis-function centers c_i;
S3, calculating the variances ζ_i from the basis-function centers c_i;
the clustering weights being calculated as follows:
PCA maps S-dimensional data into an L-dimensional subspace with L ≪ S; let X = {x_n}, n = 1, 2, …, N, be zero-mean data, i.e. $\sum_{n=1}^{N} x_n = 0$, and compute the covariance matrix $C = \frac{1}{N}\sum_{n=1}^{N} x_n x_n^{T}$, where T denotes transposition;
performing eigenvalue decomposition of the covariance matrix C: $C = Q\Lambda Q^{T}$;
projecting an S-dimensional data point x_i onto the L-dimensional principal-component directions, i.e. Y = XQ_L;
letting the eigenvalues of C satisfy λ_1 ≥ λ_2 ≥ … ≥ λ_S, with $\lambda_l / \sum_{s=1}^{S}\lambda_s$ the contribution rate of the l-th principal component and $\sum_{l=1}^{L}\lambda_l / \sum_{s=1}^{S}\lambda_s$ the cumulative contribution rate of the first L principal components;
the clustering attribute weight of the l-th attribute after dimensionality reduction being $\omega_l = \lambda_l / \sum_{n=1}^{L}\lambda_n$;
with eigenvector set Q = [q_1, q_2, …, q_S] and eigenvalues Λ = diag(λ_1, λ_2, …, λ_S), the cumulative contribution rate being made greater than 95%;
According to the obtained basis function centers c_i and variances ζ_i, RBF neural network prediction is performed on the sample data after weighted FCM basis function clustering to obtain the output-layer output, specifically as follows:
S4, calculating the output R_i of the hidden layer according to the basis function center c_i and the variance ζ_i;
step S4 comprises:
obtaining the output of the hidden layer according to the dimensionality-reduced sample data X_new, the basis function center c_i and the variance ζ_i:
R_i = exp(−‖X_new − c_i‖² / (2ζ_i²)), i = 1, 2, …, N_1,
where N_1 represents the number of hidden layer neurons;
S5, calculating the output of the output layer according to the hidden layer output R_i, thereby obtaining the short-term load prediction result of the smart grid;
S6, calculating the prediction error E according to the mean square error sum function;
S7, updating the connection weights between the hidden layer neurons and the output layer neurons in the neural network;
S8, judging the prediction error E: if the prediction error E is within the expected range, ending the iterative computation; otherwise, returning to step S4 and iteratively recalculating the prediction error E.
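As an illustrative aside (not part of the claims), the PCA step of claim 1 — zero-centering the data, eigendecomposition of the covariance matrix, projection onto the leading components whose cumulative contribution rate exceeds 95%, and retention of the contribution rates as clustering attribute weights — can be sketched in Python. The function name `pca_reduce` and the `cum_threshold` parameter are assumptions for illustration only:

```python
# Sketch of the PCA dimensionality-reduction step: project onto the first L
# principal components whose cumulative variance contribution exceeds the
# threshold, and keep each retained component's contribution rate as a weight.
import numpy as np

def pca_reduce(X, cum_threshold=0.95):
    Xc = X - X.mean(axis=0)                    # zero-mean data
    C = Xc.T @ Xc / len(Xc)                    # covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(C)       # eigendecomposition C = Q Λ Q^T
    order = np.argsort(eigvals)[::-1]          # sort so that λ1 >= λ2 >= ... >= λS
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = eigvals / eigvals.sum()          # contribution rate of each component
    L = int(np.searchsorted(np.cumsum(contrib), cum_threshold) + 1)
    Y = Xc @ eigvecs[:, :L]                    # project onto first L components
    weights = eigvals[:L] / eigvals[:L].sum()  # normalized clustering attribute weights
    return Y, weights
```

The returned `weights` correspond to the ω_k of claim 1, renormalized over the retained dimensions.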
2. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S1 comprises:
determining the number of input layer neurons, the number of hidden layer neurons N_1 and the number of output layer neurons, and initializing the learning rate η and the basis function overlap factor τ, wherein the number of hidden layer neurons N_1 equals the number of basis function centers.
3. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S2 comprises:
S21, inputting the fuzzy index m, the iteration stop threshold ε, the PCA cumulative contribution rate factor θ, the number K of basis function centers, the clustering attribute weights ω_n, and the original data X;
wherein the sample data X = {x_1, x_2, …, x_N}, N is the number of sample points, each sample point x_j = {x_j1, x_j2, …, x_js}, j = 1, 2, …, N, and s represents the number of attributes contained in each sample point, i.e. the dimension of each sample point; in the PCA-WFCM method the sample data X are divided into K classes, with clustering centers V = {v_1, v_2, …, v_K}, 2 ≤ K ≤ N;
S22, performing PCA attribute dimensionality reduction on the sample data X, and calculating the dimension L of each sample point after dimensionality reduction as the smallest L satisfying
Σ_{l=1}^{L} λ_l / Σ_{s=1}^{S} λ_s ≥ θ,
retaining all dimensions up to L as clustering attributes;
where S represents the original dimension of each sample point, λ_n are the eigenvalues, representing the variance contribution rates in the PCA, and θ is the PCA cumulative contribution rate factor;
calculating each clustering attribute weight according to
ω_n = λ_n / Σ_{l=1}^{L} λ_l;
obtaining, from the clustering attribute weights ω_n, the weight vector W = {ω_n} and the dimensionality-reduced sample data X_new;
S23, initializing the membership matrix U = {u_ij}:
let 0 ≤ u_ij ≤ 1 and Σ_{i=1}^{K} u_ij = 1,
where u_ij represents the membership degree of the j-th sample point in the i-th class, and K represents the number of clustering centers V;
S24, calculating the clustering centers V = {v_i} from the membership matrix U and the dimensionality-reduced sample data X_new:
v_i = Σ_{j=1}^{N} u_ij^m x_j / Σ_{j=1}^{N} u_ij^m,
where m is the fuzzy index and x_j represents the j-th sample point;
S25, iteratively calculating the membership matrix U = {u_ij} from the dimensionality-reduced sample data X_new:
u_ij = 1 / Σ_{k=1}^{K} (d_ij / d_kj)^{2/(m−1)},
where m is the fuzzy index and represents the degree of fuzziness of the membership matrix U (the larger m, the fuzzier U; here m = 2), K represents the number of clustering centers V, and d_ij represents the weighted Euclidean distance from each sample point x_j to the clustering center v_i, calculated by
d_ij = √( Σ_{l=1}^{L} ω_l (x_jl − v_il)² ),
where L denotes the dimension of each sample point after dimensionality reduction and ω_l represents the clustering attribute weight;
S26, calculating the objective function J from the membership matrix U and the clustering centers V:
J = Σ_{i=1}^{K} Σ_{j=1}^{N} u_ij^m d_ij²,
where m is the fuzzy index, d_ij represents the weighted Euclidean distance from each sample point x_j to the clustering center v_i, K represents the number of clustering centers V, and N represents the number of sample points;
S27, judging the objective function J: if |J^(t) − J^(t−1)| < ε, stopping the iterative calculation and outputting the clustering centers V, i.e. the basis function centers c_i; otherwise, returning to step S24 until |J^(t) − J^(t−1)| < ε is satisfied, where ε represents the iteration stop threshold and t represents the number of iterations.
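The iterative loop of steps S23–S27 (random membership initialization, center update, weighted-distance membership update, and objective-based stopping test) can be sketched as follows. This is a minimal illustration, assuming numpy and illustrative names (`weighted_fcm`, `eps`, `max_iter`); it is not the patent's implementation:

```python
# Minimal weighted FCM sketch: memberships U (K x N), centers V (K x L),
# with a per-attribute weight vector w applied inside the distance.
import numpy as np

def weighted_fcm(X, K, w, m=2.0, eps=1e-5, max_iter=100):
    """X: (N, L) reduced samples; w: (L,) attribute weights; K clusters."""
    rng = np.random.default_rng(0)
    U = rng.random((K, len(X)))
    U /= U.sum(axis=0)                              # each column of U sums to 1 (S23)
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)  # cluster centers v_i (S24)
        # weighted Euclidean distances d_ij (S25); small floor avoids division by zero
        d = np.sqrt(np.maximum(
            ((X[None, :, :] - V[:, None, :]) ** 2 * w).sum(axis=2), 1e-12))
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)                          # membership update u_ij (S25)
        J = (U ** m * d ** 2).sum()                 # objective function J (S26)
        if abs(J - J_prev) < eps:                   # stopping test (S27)
            break
        J_prev = J
    return V, U
```

The returned centers V play the role of the basis function centers c_i in the subsequent RBF network.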
4. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S3 comprises:
calculating the radius σ_i of the hidden layer neurons according to the following formula:
σ_i = τ · min_{j≠i} ‖c_i − c_j‖, i, j = 1, 2, …, N_1,
where c_i represents the basis function center, N_1 represents the number of hidden layer neurons, and τ represents the basis function overlap factor.
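Claim 4's rule sets each hidden neuron's radius to the overlap factor τ times the distance to its nearest neighbouring center; a direct sketch (the function name `basis_radii` is illustrative):

```python
# σ_i = τ · min_{j != i} ||c_i − c_j||  for every basis function center.
import numpy as np

def basis_radii(centers, tau=1.0):
    diff = centers[:, None, :] - centers[None, :, :]
    d = np.linalg.norm(diff, axis=2)    # pairwise center distances
    np.fill_diagonal(d, np.inf)         # exclude j == i from the minimum
    return tau * d.min(axis=1)
```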
5. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S5 comprises:
obtaining the output of the output layer from the dimensionality-reduced sample data X_new:
y = Σ_{i=1}^{N_1} w_i R_i,
where N_1 represents the number of hidden layer neurons, w_i represents the connection weight from the i-th hidden layer neuron to the output layer neuron, and R_i represents the output of the hidden layer.
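The forward pass of steps S4 and S5 — Gaussian hidden outputs followed by a weighted sum at a single output neuron — can be sketched as below. Names (`rbf_forward`) are illustrative:

```python
# R_i = exp(-||x - c_i||^2 / (2 σ_i^2)) for each hidden neuron (step S4),
# then y = Σ_i w_i R_i at the output layer (step S5).
import numpy as np

def rbf_forward(x, centers, sigma, w):
    R = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))
    return R, R @ w
```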
6. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S6 comprises:
the prediction error E is calculated using the mean square error sum function:
taking a set of input vectors {x_j}, j = 1, 2, …, O, and the corresponding output values {y_j}, j = 1, 2, …, O, as training samples, where O is the number of samples, the prediction error is
E = (1/2) Σ_{j=1}^{O} (y_j − ŷ_j)²,
where ŷ_j is the network output for x_j.
7. The smart grid short-term load prediction method based on the improved RBF neural network as claimed in claim 1, wherein step S7 comprises:
the connection weight W is updated according to the gradient-descent rule:
W(q+1) = W(q) − η ∂E/∂W,
where η denotes the learning rate, E denotes the prediction error, and q denotes the number of updates.
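Steps S6–S8 together form the training loop: compute the error, test it against the expected range, and otherwise apply a gradient step to the hidden-to-output weights. The sketch below assumes the ½-scaled sum-of-squares criterion and a plain gradient-descent update, since the patent's exact formulas are not reproduced here; all names (`train_output_weights`, `tol`) are illustrative:

```python
# Given the fixed hidden-layer outputs R (O x N1) and targets y (O,),
# iterate  w <- w - η ∂E/∂w  until  E = 0.5 Σ (ŷ_j - y_j)^2  is small enough.
import numpy as np

def train_output_weights(R, y, eta=0.1, tol=1e-6, max_iter=5000):
    w = np.zeros(R.shape[1])
    E = np.inf
    for _ in range(max_iter):
        err = R @ w - y
        E = 0.5 * np.sum(err ** 2)      # prediction error E (S6)
        if E < tol:                     # expected-range test (S8)
            break
        w -= eta * R.T @ err            # weight update (S7)
    return w, E
```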
CN201611148874.7A 2016-12-13 2016-12-13 Intelligent power grid short-term load prediction method based on improved RBF neural network Active CN106600059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611148874.7A CN106600059B (en) 2016-12-13 2016-12-13 Intelligent power grid short-term load prediction method based on improved RBF neural network


Publications (2)

Publication Number Publication Date
CN106600059A CN106600059A (en) 2017-04-26
CN106600059B true CN106600059B (en) 2020-07-24

Family

ID=58802260


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330518A (en) * 2017-06-21 2017-11-07 国家电网公司 Energy management control method and system based on temperature adjustment load prediction
CN107403188A (en) * 2017-06-28 2017-11-28 中国农业大学 A kind of quality evaluation method and device
CN107194524B (en) * 2017-07-28 2020-05-22 合肥工业大学 RBF neural network-based coal and gas outburst prediction method
CN108229754A (en) * 2018-01-31 2018-06-29 杭州电子科技大学 Short-term load forecasting method based on similar day segmentation and LM-BP networks
CN108230121A (en) * 2018-02-09 2018-06-29 艾凯克斯(嘉兴)信息科技有限公司 A kind of product design method based on Recognition with Recurrent Neural Network
CN108680358A (en) * 2018-03-23 2018-10-19 河海大学 A kind of Wind turbines failure prediction method based on bearing temperature model
CN108631817B (en) * 2018-05-10 2020-05-19 东北大学 Method for predicting frequency hopping signal frequency band based on time-frequency analysis and radial neural network
CN109179133A (en) * 2018-11-05 2019-01-11 常熟理工学院 For prejudging the elevator intelligent maintenance prediction technique and system of failure
CN109284876A (en) * 2018-11-19 2019-01-29 福州大学 Based on PCA-RBF Buried Pipeline rate prediction method
CN110059824A (en) * 2019-05-22 2019-07-26 杭州电子科技大学 A kind of neural net prediction method based on principal component analysis
CN110365647A (en) * 2019-06-13 2019-10-22 广东工业大学 A kind of false data detection method for injection attack based on PCA and BP neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982393A (en) * 2012-11-09 2013-03-20 山东电力集团公司聊城供电公司 Online prediction method of electric transmission line dynamic capacity
CN103095494A (en) * 2012-12-31 2013-05-08 北京邮电大学 Risk evaluation method of electric power communication network
CN103136598A (en) * 2013-02-26 2013-06-05 福建省电力有限公司 Monthly electrical load computer forecasting method based on wavelet analysis
CN103646354A (en) * 2013-11-28 2014-03-19 国家电网公司 Effective index FCM and RBF neural network-based substation load characteristic categorization method
CN105678404A (en) * 2015-12-30 2016-06-15 东北大学 Micro-grid load prediction system and method based on electricity purchased on-line and dynamic correlation factor
CN105787584A (en) * 2016-01-28 2016-07-20 华北电力大学(保定) Wind turbine malfunction early warning method based on cloud platform
EP3098762A1 (en) * 2015-05-29 2016-11-30 Samsung Electronics Co., Ltd. Data-optimized neural network traversal



Similar Documents

Publication Publication Date Title
Chang et al. An improved neural network-based approach for short-term wind speed and power forecast
Zhang et al. Short-term wind speed forecasting using empirical mode decomposition and feature selection
Li et al. Applying various algorithms for species distribution modelling
CN103049792B (en) Deep-neural-network distinguish pre-training
Stoyanov et al. Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure
Nowlan Maximum likelihood competitive learning
Kang et al. A weight-incorporated similarity-based clustering ensemble method based on swarm intelligence
Poloni et al. Hybridization of a multi-objective genetic algorithm, a neural network and a classical optimizer for a complex design problem in fluid dynamics
Pai System reliability forecasting by support vector machines with genetic algorithms
Angelov et al. A new type of simplified fuzzy rule-based system
Juang et al. A locally recurrent fuzzy neural network with support vector regression for dynamic-system modeling
Anastasakis et al. The development of self-organization techniques in modelling: a review of the group method of data handling (GMDH)
Chen et al. Particle swarm optimization aided orthogonal forward regression for unified data modeling
Moreno et al. Wind speed forecasting approach based on singular spectrum analysis and adaptive neuro fuzzy inference system
CN101414366B (en) Method for forecasting electric power system short-term load based on method for improving uttermost learning machine
Pan et al. A comparison of neural network backpropagation algorithms for electricity load forecasting
US7340440B2 (en) Hybrid neural network generation system and method
Abdoos et al. Short term load forecasting using a hybrid intelligent method
Abraham et al. Modeling chaotic behavior of stock indices using intelligent paradigms
CN102622418B (en) Prediction device and equipment based on BP (Back Propagation) nerve network
CN105488528B (en) Neural network image classification method based on improving expert inquiry method
CN104361393A (en) Method for using improved neural network model based on particle swarm optimization for data prediction
US20130138589A1 (en) Exploiting sparseness in training deep neural networks
Li et al. Deep reinforcement learning: Framework, applications, and embedded implementations
Kavousi-Fard Modeling uncertainty in tidal current forecast using prediction interval-based SVR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant