CN110808581B - Active power distribution network power quality prediction method based on DBN-SVM
Classifications
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/38—Arrangements for parallely feeding a single network by two or more generators, converters or transformers
- H02J3/381—Dispersed generators
Abstract
A DBN-SVM-based power quality prediction method for an active power distribution network containing distributed generation (DG) comprises the following steps: preprocessing the input and output data; pre-training each RBM layer of the DBN model; fine-tuning the DBN model; extracting features from the environmental indices that affect power quality using the DBN; predicting the steady-state power quality index with a top-layer support vector regression machine; and inverse-normalizing the prediction result and performing error analysis. The advantages of the invention are: 1. the DBN-SVM model predicts the power quality of the DG-containing distribution network effectively, increases the probability that the error falls in a low-value interval compared with a conventional SVM, and improves prediction accuracy; 2. unsupervised layer-by-layer training followed by supervised fine-tuning of the overall weights yields a complete DBN bottom-layer structure that provides a feature matrix to the top-layer SVM; 3. after its parameters are optimized by a grid search, the top-layer SVM achieves effective power quality prediction.
Description
Technical Field
The invention relates to a DBN-SVM-based method for predicting the power quality of an active power distribution network, and belongs to the fields of electrical engineering and power quality.
Background
An active power distribution network is a power distribution system that manages power flow through a network with a flexible topology and can actively manage and control distributed energy resources. Compared with a traditional distribution network, an active distribution network contains a certain amount of controllable distributed resources, has a relatively mature level of controllability, possesses a control center that performs coordinated optimization and management, and has a flexibly adjustable network topology. It differs from a microgrid in that a microgrid operates as a small grid in grid-connected or islanded conditions, whereas an active distribution network covers both the normal grid-connected and islanded states of the main grid. The control center of an active distribution network can monitor the main grid, the distribution network, user-side loads, and distributed generators, and can issue corresponding coordinated optimal control strategies, thereby improving supply reliability and power quality, raising energy-utilization efficiency, lowering grid operating cost, and reducing system faults.
Because of the high penetration of distributed generation (DG) in an active distribution network and its unstable characteristics (randomness, volatility, dispersion, non-dispatchability, and the like), power quality problems such as voltage fluctuation, flicker, harmonic distortion, and overvoltage readily arise at the nodes of the distribution network. Predicting the power quality situation of an active distribution network provides a basis and a premise for distribution network control strategies and for active control of power quality; it helps improve power quality, and it is of great significance for the safe and economical operation of the grid, for improving the quality of industrial products, for guaranteeing the normal operation of scientific experiments, and for reducing energy consumption.
At present, there are few research results on predicting the power quality of an active power distribution network with a deep belief network (DBN) combined with a support vector machine (SVM). Most studies apply deep belief networks to load prediction: for example, the patent with application number CN201810364330 proposes a DBN-based 24-hour power load prediction method, and the patent with application number CN201811356210 provides a short-term power system load prediction method based on a similar-day method and PSO-DBN. Some studies apply the DBN to electricity consumption and power prediction: the patent with application number CN201710465683 proposes a deep-learning-based electricity consumption prediction method for low-voltage users, and the patent with application number CN201810839539 provides a high-accuracy photovoltaic generation prediction model together with its construction method and application. In the field of power quality prediction, the patent with application number 201310122440.X provides a wind power quality trend prediction method based on DBSCAN clustering and a Monte Carlo algorithm, and the patent with application number 201810644850.3 provides a support-vector-machine-based power quality prediction method for a distribution network containing distributed generation, which uses only a support vector machine for prediction and handles large data volumes and complex models poorly.
The patent with application number 201811561343.X provides a power quality prediction method for a distribution network containing distributed generation based on clustering and a neural network; however, the K-means clustering it uses chooses the initial cluster centers randomly, so the final cluster assignment is not fixed, the distribution of the clustering results strongly influences the subsequent neural-network prediction, and the method places high demands on the training data set.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a DBN-SVM-based method for predicting the power quality of an active power distribution network.
In this method, a deep belief network (DBN) is added on top of a traditional support vector machine (SVM) to extract input features. First, the restricted Boltzmann machines (RBMs) are trained layer by layer without supervision; then the DBN composed of these RBMs undergoes supervised fine-tuning of its overall weights. The method jointly considers the time, illumination intensity, and temperature that affect DG output, as well as the load-access conditions that directly affect the power quality of the active distribution network. An SVM serves as the top-layer regression prediction model, its parameters are optimized by a grid search, and the steady-state power quality index of the active distribution network is predicted effectively.
In order to achieve the above purpose, the present invention provides an active power distribution network power quality prediction method based on a DBN-SVM, i.e. a deep belief network (DBN) combined with a support vector machine (SVM). As shown in fig. 1, the process includes the following steps:
1. Preprocess the input and output data: to eliminate the precision loss caused by inconsistent dimensions (units and scales) during data processing, the input and output data of the DBN-SVM model are normalized;
the input of the DBN-SVM model has m dimensions and n groups of data in total; the whole input matrix is denoted X, i.e.
X = [x_11 x_12 ... x_1m; x_21 x_22 ... x_2m; ...; x_n1 x_n2 ... x_nm]  (1)
where x_nm is the m-th dimension of the n-th group of input data, and so on for the other entries;
the output of the DBN-SVM model has 1 dimension and n groups of data in total; the whole output matrix is denoted Y, i.e.
Y = [y_1, y_2, ..., y_n]^T  (2)
where y_n is the output data of the n-th group, and so on for the other entries;
the input and output data are normalized according to expressions (3) and (4):
X_p' = (X_p - X_p.min) / (X_p.max - X_p.min)  (3)
Y' = (Y - Y_min) / (Y_max - Y_min)  (4)
where X_p and X_p' are the p-th input variable array before and after normalization, and X_p.min and X_p.max are the minimum and maximum values of X_p; Y and Y' are the output index data before and after normalization, and Y_min and Y_max are the minimum and maximum values of the elements in the output matrix Y;
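The min-max normalization of expressions (3) and (4) can be sketched as follows; the array shapes follow the matrices defined above, while the helper name `minmax_normalize` and the toy values are illustrative:

```python
import numpy as np

def minmax_normalize(A):
    """Column-wise min-max scaling to [0, 1]; also returns the
    per-column minima and maxima needed for the inverse mapping."""
    A_min = A.min(axis=0)
    A_max = A.max(axis=0)
    return (A - A_min) / (A_max - A_min), A_min, A_max

# n = 4 groups, m = 2 input dimensions (toy values for illustration)
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
Y = np.array([[0.2], [0.4], [0.6], [0.8]])

X_norm, X_min, X_max = minmax_normalize(X)
Y_norm, Y_min, Y_max = minmax_normalize(Y)
```

Keeping the stored minima and maxima is what later makes the inverse normalization of equation (25) possible.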
2. Pre-train each RBM layer of the DBN model: the DBN contains K layers of RBMs; a single-layer RBM network consists of a visible layer and a hidden layer, where v = {v_1, v_2, v_3, ..., v_I} denotes the visible layer, h = {h_1, h_2, ..., h_J} denotes the hidden layer, I is the number of visible-layer nodes, and J is the number of hidden-layer nodes; each node in the RBM takes a binary value in {0, 1}: a node value of 1 means the node is in the on state, and a node value of 0 means it is in the off state;
step 201, sorting the normalized data in step 1 according to a time sequence, dividing the last 24 groups of data in n groups of data as a test set, taking 70% of the rest n-24 groups of data as a training set, and taking the other 30% as a verification set;
step 202, initialize the parameters of the single-layer RBM: input the training samples as the visible layer of the first-layer RBM; set the RBM training period epoch, the learning rate τ, the number of visible-layer nodes I, and the number of hidden-layer nodes J; and initialize the bias vectors a = {a_1, a_2, ..., a_I}, b = {b_1, b_2, ..., b_J} and the weight matrix w:
step 203, train this layer's RBM network: the joint energy of the visible-layer nodes v_i, i ∈ {1, 2, ..., I}, and the hidden-layer nodes h_j, j ∈ {1, 2, ..., J}, can be represented by the energy function of equation (6):
E(v, h | θ) = -Σ_i a_i v_i - Σ_j b_j h_j - Σ_i Σ_j v_i w_ij h_j  (6)
where θ = {w_ij, a_i, b_j}, w_ij is the connection weight between node v_i and node h_j, a_i is the bias of visible-layer node v_i, and b_j is the bias of hidden-layer node h_j;
the joint probability distribution of the visible-layer and hidden-layer nodes is given by equation (7):
P(v, h | θ) = exp(-E(v, h | θ)) / Z(θ)  (7)
where Z(θ) is the normalization factor, also called the partition function, as in equation (8):
Z(θ) = Σ_{v,h} exp(-E(v, h | θ))  (8)
given the visible-layer nodes v_i, the hidden-layer nodes h_j are conditionally independent of one another, and the conditional distribution of the hidden layer given the visible layer is obtained from equation (9):
P(h | v, θ) = Π_j P(h_j | v, θ)  (9)
from equation (9), the probability that hidden-layer node h_j takes the value 1, i.e. its activation probability, is given by equation (10):
P(h_j = 1 | v, θ) = 1 / (1 + exp(-b_j - Σ_i w_ij v_i))  (10)
where exp() is the exponential function and Σ_i() denotes summation of the bracketed term over all i;
similarly, given the hidden-layer nodes h_j, the visible-layer nodes v_i are conditionally independent of one another, and the conditional distribution of the visible layer given the hidden layer is obtained from equation (11):
P(v | h, θ) = Π_i P(v_i | h, θ)  (11)
from equation (11), the probability that visible-layer node v_i takes the value 1, i.e. its activation probability, is given by equation (12):
P(v_i = 1 | h, θ) = 1 / (1 + exp(-a_i - Σ_j w_ij h_j))  (12)
marginalizing P(v, h | θ) of equation (7) over the hidden layer h gives the activation probability of the visible layer v, equation (13):
P(v | θ) = (1/Z(θ)) Σ_h exp(-E(v, h | θ))  (13)
the RBM parameter values w_ij, a_i, and b_j are taken at the maximum of the activation probability P(v | θ); maximizing P(v | θ) can be transformed into maximizing the likelihood function L(θ), which can take the form of equation (14):
L(θ) = Σ_t ln P(v^(t) | θ)  (14)
where the sum runs over the training samples v^(t);
the optimal parameters θ* of the RBM model are obtained from equation (15):
θ* = arg max_θ L(θ)  (15)
where arg max_θ denotes the value of θ at which L(θ) attains its maximum;
the RBM weights are updated by the contrastive divergence algorithm of equation (16):
Δw_ij = τ(<v_i h_j>_data - <v_i h_j>_recon)  (16)
where the learning rate τ takes a value between 0 and 1, <·>_data is the expectation over the hidden layer given the sample data input at the visible layer, and <·>_recon is the expectation over the feature reconstruction estimated by the contrastive divergence algorithm;
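One contrastive-divergence (CD-1) update for a binary RBM, combining the activation probabilities of equations (10) and (12) with the weight update of equation (16), can be sketched as follows. The batch handling, state sampling, and bias updates are assumptions beyond the patent text; the function name `cd1_step` is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, w, a, b, tau):
    """One CD-1 update for a binary RBM.
    v0: (batch, I) visible data; w: (I, J); a: (I,); b: (J,)."""
    # Eq. (10): hidden activation probabilities given the data
    ph0 = sigmoid(b + v0 @ w)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
    # Eq. (12): reconstruct the visible layer from the sampled hidden states
    pv1 = sigmoid(a + h0 @ w.T)
    # hidden probabilities for the reconstruction
    ph1 = sigmoid(b + pv1 @ w)
    # Eq. (16): tau * (<v h>_data - <v h>_recon), averaged over the batch
    dw = tau * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    da = tau * (v0 - pv1).mean(axis=0)
    db = tau * (ph0 - ph1).mean(axis=0)
    return w + dw, a + da, b + db

I, J = 5, 8
w = rng.normal(0.0, 0.01, (I, J))
a = np.zeros(I)
b = np.zeros(J)
v0 = (rng.random((16, I)) < 0.5).astype(float)
w, a, b = cd1_step(v0, w, a, b, tau=0.1)
```

Running this step repeatedly for one epoch, then feeding each layer's hidden output upward, corresponds to the layer-by-layer pre-training of steps 204 and 205.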
step 204, take the output of the previous RBM layer as the input of the next RBM layer, and repeat steps 202 to 203 until the energy function of that layer's RBM converges, obtaining that layer's output;
step 205, repeating step 204, and training the RBMs layer by layer until all RBMs in K layers are trained;
3. fine tuning the DBN model: the aim is to minimize the overall reconstruction error of the DBN model;
step 301, inputting the input data of the normalized training set into the pre-trained DBN model in step 2 to obtain an output intermediate vector G;
step 302, transmitting the output G as input to a top-layer SVM model, and outputting power quality prediction result data H;
step 303, comparing the H with the real power quality output data of the training set, transmitting an error back to the bottom DBN network, and finely adjusting the parameters of the whole DBN network until the error is within a set range;
step 304, if the fine-tuning in step 303 cannot bring the error within the set range but the number of fine-tuning iterations reaches the set maximum, the tuning process is terminated;
4. carrying out feature extraction on environmental indexes influencing the quality of electric energy based on the DBN: inputting the normalized training set and test set input data into the optimal DBN model obtained in the step 3, and respectively extracting characteristic matrixes input by the training set and test set samples;
5. predicting the steady-state index of the electric energy quality based on a top support vector regression:
step 501, performing nonlinear mapping on the feature matrix input by the training set sample extracted in step 4 to a high-dimensional feature space, and performing linear approximation in the space, where the form of the approximation function f (x') can be represented by formula (17):
f(x′)=ω·Φ(x′)+d (17)
in the formula, "·" is an inner product operator, ω is a weight vector adjustable in a high-dimensional space, x 'is input of a support vector machine, the dimension of the support vector machine is n', Φ (x ') is nonlinear mapping with the input of x', and d is a bias term;
the ω and d values are estimated by minimizing functional equation (18):
wherein Rreg [ f]For regularizing the risk functional, Remp [ f ]]For empirical risk, γ is a regularization constant, | | | | | represents the Euclidean norm, t is a variable that traverses 1 to n ', y'tThe t item output of the support vector machine is represented;
equation (18) is equivalent to solving the optimization problem shown in equation (19):
min T = (1/2)||ω||^2 + c Σ_{t=1}^{n'} (ξ_t + ξ_t*)  (19)
with the constraints
y'_t - ω·Φ(x'_t) - d ≤ ε + ξ_t,  ω·Φ(x'_t) + d - y'_t ≤ ε + ξ_t*,  ξ_t ≥ 0, ξ_t* ≥ 0  (20)
where min T denotes minimization of the objective function T; ξ_t and ξ_t* are the slack variables above and below the hyperplane; c is a regularization constant, and the larger its value, the more closely the data are fitted; the coefficient ε controls the width of the regression approximation tube and determines the fitting accuracy on the training samples: the larger ε is, the fewer the support vectors and the lower the accuracy;
introducing the kernel method converts equations (19) and (20) into the forms of equations (21) and (22):
max T = -(1/2) Σ_{t=1}^{n'} Σ_{s=1}^{n'} (l_t - l_t*)(l_s - l_s*) Kernel(x'_t, x'_s) + Σ_{t=1}^{n'} y'_t (l_t - l_t*) - ε Σ_{t=1}^{n'} (l_t + l_t*)  (21)
with the constraints
Σ_{t=1}^{n'} (l_t - l_t*) = 0,  0 ≤ l_t ≤ c,  0 ≤ l_t* ≤ c  (22)
where max T denotes maximization of the objective function T, s is an index defined to traverse the set [1, n'], and l_s, l_s*, l_t, l_t* are the four Lagrange multipliers, i.e. the solution that minimizes R_reg[f]; x'_s denotes the sample input with subscript s, and Kernel denotes the kernel function;
solving the quadratic program shown in equations (21) and (22) yields the nonlinear mapping of equation (23):
f(x') = Σ_{t=1}^{n'} (l_t - l_t*) Kernel(x'_t, x') + d  (23)
where Kernel(x'_t, x'_u) = Φ(x'_t)·Φ(x'_u) is a kernel function satisfying the Mercer condition; an RBF kernel, i.e. a radial basis kernel, is selected, whose expression is given by equation (24):
Kernel(x′t,x′u)=exp(-g||x′t-x′u||2),g>0 (24)
where the parameter g is the gamma parameter of the RBF kernel, and u = 1, 2, ..., n';
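As a concrete illustration of the RBF-kernel regression of equations (17) through (24), the sketch below uses scikit-learn's SVR, whose parameters C, gamma, and epsilon correspond to c, g, and ε; the feature matrix G_train standing in for the DBN output and the synthetic target are assumptions for illustration only:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
G_train = rng.random((100, 4))            # stand-in for the DBN feature matrix
y_train = np.sin(G_train.sum(axis=1))     # synthetic target in place of voltage deviation

# RBF-kernel support vector regression: C plays the role of c in eq. (19),
# gamma the role of g in eq. (24), epsilon the tube width of eq. (20)
model = SVR(kernel="rbf", C=10.0, gamma=1.0, epsilon=0.01)
model.fit(G_train, y_train)
pred = model.predict(G_train[:5])
```

Internally the library solves the dual quadratic program of equations (21) and (22), so only the kernel evaluations and Lagrange multipliers are computed, never Φ(x') itself.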
step 502, use a grid search: lay the parameters c and g out on a grid over the range [0.001, 1000] with a multiplicative step of 10, and select a combination of c and g parameters;
step 503, traverse all grid values of c and g, construct a support-vector-machine-based PQ prediction model following the regression process of step 501, predict the output of the verification set, and select the pair of c and g parameters that minimizes the prediction error, i.e. the optimal parameter combination c_opt, g_opt;
Step 504, calculating the prediction accuracy of the verification set, if the prediction accuracy cannot meet the requirement, re-dividing the training set and the verification set, and repeating the step 501, the step 502 and the step 503 to obtain a support vector machine model in an optimal division mode;
step 505, inputting the test set into the support vector machine model in the optimal division mode obtained in the step 504 to obtain a power quality index prediction result;
6. and (3) performing inverse normalization on the prediction result, and performing error analysis:
step 601, performing inverse normalization processing on the prediction result in step 505 according to equation (25):
Y_pre = Y_pre' × (Y_max - Y_min) + Y_min  (25)
where Y_pre and Y_pre' are the output data of the power quality prediction model after and before the inverse normalization, and Y_max and Y_min are the maximum and minimum values of the elements in the output matrix Y;
step 602, denote the output of the test set as the actual output Y_real; the relative error index RE of the prediction result is calculated according to equation (26):
RE = |Y_pre - Y_real| / Y_real × 100%  (26)
step 603, calculate the root-mean-square error index RMAE of the prediction result according to equation (27):
RMAE = sqrt( (1/N_pre) Σ_k (Y_pre,k - Y_real,k)^2 )  (27)
where N_pre is the number of elements in the Y_pre data set and the sum runs over its elements k.
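A sketch of the inverse normalization and the error indices of step 6; the exact forms of RE and RMAE are assumed to be the standard element-wise relative error and root-mean-square error, since the patent's equation images are not reproduced here:

```python
import numpy as np

def inverse_normalize(Y_pre_n, Y_min, Y_max):
    """Eq. (25): map normalized predictions back to the original scale."""
    return Y_pre_n * (Y_max - Y_min) + Y_min

def relative_error(Y_pre, Y_real):
    """Assumed form of eq. (26): element-wise relative error (as a fraction)."""
    return np.abs(Y_pre - Y_real) / np.abs(Y_real)

def rmae(Y_pre, Y_real):
    """Assumed form of eq. (27): root-mean-square error over N_pre predictions."""
    return np.sqrt(np.mean((Y_pre - Y_real) ** 2))

Y_min, Y_max = 0.2, 0.8
Y_pre_n = np.array([0.0, 0.5, 1.0])
Y_pre = inverse_normalize(Y_pre_n, Y_min, Y_max)
Y_real = np.array([0.25, 0.5, 0.75])
```

With these toy values the inverse normalization recovers [0.2, 0.5, 0.8] on the original scale before the errors are computed.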
The invention has the following beneficial effects: 1. the DBN-SVM model predicts the power quality of the DG-containing distribution network effectively, increases the probability that the error falls in a low-value interval compared with a conventional SVM, and improves prediction accuracy; 2. unsupervised layer-by-layer training followed by supervised fine-tuning of the overall weights yields a complete DBN bottom-layer structure that provides a feature matrix to the top-layer SVM; 3. after its parameters are optimized by a grid search, the top-layer SVM achieves effective power quality prediction.
Drawings
FIG. 1 is a general block diagram of the process of the present invention.
Fig. 2 shows a topology structure of a 13-node 10.5kV DG-containing active power distribution network of the present invention.
FIG. 3 is a comparison of the prediction results of voltage deviation between SVM and DBN-SVM models of the present invention.
FIG. 4 is a comparison of SVM and DBN-SVM model voltage deviation prediction relative errors of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples. The general block diagram of the method for predicting the electric energy quality of the active power distribution network including the DG in the embodiment is shown in the attached figure 1, and the method comprises the following steps:
1. Preprocess the input and output data: to eliminate the precision loss caused by inconsistent dimensions (units and scales) during data processing, the input and output data of the DBN-SVM model are normalized;
in the embodiment, the 13-node 10.5 kV DG-containing active power distribution network of fig. 2 is established; time, illumination intensity, temperature, and load are taken as the network inputs, where the load is split into two inputs: the load of the DG access node to be predicted, and the total load of the other nodes. The input thus has 5 dimensions and 504 groups of data in total, and the whole input matrix is denoted X, as in equation (1);
taking a representative voltage deviation in the power quality indexes as an index to be predicted, namely the output of a network, wherein the output comprises 1 dimensionality and 504 groups of data in total, and the whole output matrix is marked as Y as shown in a formula (2);
carrying out normalization operation on input data and output data according to the formulas (3) and (4) respectively;
2. Pre-train each RBM layer of the DBN model: the embodiment sets a DBN with K = 2 RBM layers; a single-layer RBM network consists of a visible layer and a hidden layer, where v = {v_1, v_2, v_3, ..., v_I} denotes the visible layer and h = {h_1, h_2, ..., h_J} denotes the hidden layer; a node value of 1 means the current node is in the on state, and a node value of 0 means it is in the off state;
step 201, sort the normalized data from step 1 in time order; take the last 24 groups as the test set, use 70% of the remaining 480 groups as the training set (336 groups in total), and the other 30% as the verification set (144 groups in total);
step 202, initialize the parameters of the single-layer RBM: the training samples are used as the visible-layer input of the first-layer RBM; the RBM training period epoch is set to 500, the learning rate τ to 0.00001, the number of visible-layer nodes I to 5, and the number of hidden-layer nodes J to 200; the bias vectors a = {a_1, a_2, ..., a_5}, b = {b_1, b_2, ..., b_200} and the weight matrix w of equation (5) are initialized;
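The initialization of step 202 with the embodiment's settings (epoch = 500, τ = 0.00001, I = 5, J = 200) can be sketched as follows; since the content of equation (5) is not reproduced in this text, a small random Gaussian initialization of w is assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

epoch = 500        # RBM training period
tau = 1e-5         # learning rate
I, J = 5, 200      # visible- and hidden-layer node counts

a = np.zeros(I)                      # visible biases a_1..a_5
b = np.zeros(J)                      # hidden biases b_1..b_200
w = rng.normal(0.0, 0.01, (I, J))    # weight matrix w (assumed init; eq. (5) not shown)
```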
step 203, train this layer's RBM network: the joint energy of the visible-layer nodes v_i, i ∈ {1, 2, ..., 5}, and the hidden-layer nodes h_j, j ∈ {1, 2, ..., 200}, can be represented by the energy function shown in equation (6);
calculating the joint probability distribution of the visible layer node and the hidden layer node by using a formula (7), wherein a normalization factor Z (theta) is shown as a formula (8);
if the visible-layer nodes v_i are known, the conditional distribution of the hidden-layer nodes h_j can be obtained from equation (9), and the probability that hidden-layer node h_j takes the value 1, i.e. its activation probability, is calculated as in equation (10);
if the h_j are known, the v_i are conditionally independent of one another, and their conditional distribution can be obtained from equation (11);
from equation (11), the probability that visible-layer node v_i takes the value 1, i.e. its activation probability, is obtained as in equation (12);
marginalizing P(v, h | θ) of equation (7) over the hidden-layer nodes h_j yields the activation probability of the visible-layer nodes v_i, as in equation (13);
the RBM parameter values w_ij, a_i, and b_j are taken at the maximum of the activation probability P(v | θ); maximizing P(v | θ) can be converted into maximizing the likelihood function L(θ), which can take the form of equation (14);
the optimal parameters θ* of the RBM model are obtained from equation (15);
the weight update of the RBM can be obtained by a contrast divergence algorithm of an equation (16);
step 204, taking the output of the RBM of the previous layer as the input of the RBM of the next layer, repeating the steps 202 to 203 until the energy function of the RBM of the next layer is converged, and obtaining the output of the next layer;
step 205, repeat step 204, training the RBMs layer by layer until all K = 2 layers of RBMs are trained;
3. fine tuning the DBN model: minimizing the overall reconstruction error of the DBN model;
step 301, inputting the input data of the normalized training set into the pre-trained DBN model in step 2 to obtain an output intermediate vector G;
step 302, transmitting the output G as input to a top-layer SVM model, and outputting power quality prediction result data H;
step 303, comparing the H with the real power quality output data of the training set, transmitting an error back to the bottom DBN network, and finely adjusting the parameters of the whole DBN network until the error is within a set range;
step 304, if the fine-tuning in step 303 cannot bring the error within the set range but the number of fine-tuning iterations reaches the set maximum, the tuning process is terminated;
4. carrying out feature extraction on environmental indexes influencing the quality of electric energy based on the DBN: inputting the normalized training set and test set input data into the optimal DBN model obtained in the step 3, and respectively extracting characteristic matrixes input by the training set and test set samples;
5. predicting the steady-state index of the electric energy quality based on a top support vector regression:
step 501, performing nonlinear mapping on the feature matrix input by the training set sample extracted in step 4 to a high-dimensional feature space, and performing linear approximation in the space, wherein the form of the approximation function f (x') can be represented by formula (17); wherein ω and d are estimated by minimizing functional formula (18), and formula (18) is equivalent to solving the optimization problem of formula (19), and the constraint condition is shown in formula (20);
introduce the kernel method to convert equations (19) and (20) into equations (21) and (22), with the constraints given by equation (22); solve the quadratic program of equations (21) and (22) to obtain equation (23), where Kernel(x'_t, x'_u) = Φ(x'_t)·Φ(x'_u); an RBF kernel, i.e. a radial basis kernel, is selected, whose expression is given by equation (24);
since the values of ω and Φ(x') need not be computed when evaluating f(x'), only the Lagrange multipliers l_t, l_t* and the kernel function Kernel(x'_t, x'_k) are needed, which avoids the curse of dimensionality; the parameters to be optimized when building the SVM regression model are c and g;
step 502, use a grid search: lay the parameters c and g out on a grid over the range [0.001, 1000] with a multiplicative step of 10, and select a combination of c and g parameters;
step 503, traverse all grid values of c and g, construct a support-vector-machine-based PQ prediction model following the regression process of step 501, predict the output of the verification set, and select the pair of c and g parameters that minimizes the prediction error, i.e. the optimal parameter combination c_opt, g_opt;
Step 504, calculating the prediction accuracy of the verification set, if the prediction accuracy cannot meet the requirement, re-dividing the training set and the verification set, and repeating the step 501, the step 502 and the step 503 to obtain a support vector machine model in an optimal division mode;
step 505, inputting the test set into the support vector machine model in the optimal division mode obtained in the step 504 to obtain a power quality index prediction result;
Step 6, performing inverse normalization and error analysis on the prediction result
Step 601, performing inverse normalization on the prediction result in the step 505 according to the formula (25);
step 602, recording the output of the test set as the actual output Y_real, and calculating the relative error index RE of the prediction result according to formula (26);
in the example, the predicted voltage deviation and the corresponding error at each time point are shown in Table 1:
TABLE 1 predicted values and errors of voltage deviations for DBN-SVM models
step 603, calculating the root mean square error index RMAE of the prediction result according to formula (27); in this example the calculation result is 0.00068898;
in the embodiment, the DBN-SVM is compared with a common SVM; the 24-hour voltage deviation prediction results and relative errors on the test set are shown in Figures 3 and 4; the frequencies with which the voltage deviation prediction errors of the SVM model and the DBN-SVM model fall in different ranges are shown in Table 2;
TABLE 2 comparison of frequency values of prediction errors of SVM and DBN-SVM voltage deviation in different ranges
The analysis of the embodiment shows that the frequency with which the relative error of the DBN-SVM prediction model falls in the range [0, 1) is significantly higher than that of the SVM prediction model. The DBN-SVM model therefore predicts the power quality of an active power distribution network containing DGs with good effect: compared with the traditional SVM prediction model, it increases the probability that the error rate falls in a low-value interval and improves the prediction accuracy.
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments but rather by the equivalents thereof as may occur to those skilled in the art upon consideration of the present inventive concept.
Claims (1)
1. A DBN-SVM based active power distribution network electric energy quality prediction method comprises the following steps:
step 1, preprocessing input and output data: in order to eliminate the precision loss caused in data processing by non-uniform dimensions, a normalization operation is performed on the input and output data of the DBN-SVM model;
the input of the DBN-SVM model comprises m dimensions, with n groups of data in total; the whole input matrix is denoted X, namely
wherein x_nm is the input data of the nth group in the mth dimension, and the rest can be deduced by analogy;
the output of the DBN-SVM model comprises 1 dimension, with n groups of data in total; the whole output matrix is denoted Y, namely
wherein y_n is the output data of the nth group, and the rest can be deduced by analogy;
carrying out normalization operation on input data and output data according to the expressions (3) and (4):
wherein X_p and X_p' are the p-th input variable arrays before and after normalization, and X_p.min and X_p.max are respectively the minimum and maximum values among all elements of X_p; Y and Y' are the output matrices before and after normalization, and Y_min and Y_max are respectively the minimum and maximum values of the elements in the output matrix Y;
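Formulas (3) and (4) are not reproduced in this extracted text; a minimal sketch, assuming the standard min-max scaling to [0, 1] that the surrounding definitions (X_p.min, X_p.max, Y_min, Y_max) imply, could look like this (function names are illustrative, not from the patent):

```python
import numpy as np

def normalize_inputs(X):
    """Column-wise min-max normalization of the input matrix X,
    assumed form of formula (3): each column X_p is scaled to [0, 1]
    by its own minimum and maximum."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

def normalize_outputs(Y):
    """Min-max normalization of the output vector Y, assumed form of
    formula (4); returns Y_min and Y_max for later inverse normalization."""
    Y = np.asarray(Y, dtype=float)
    y_min, y_max = Y.min(), Y.max()
    return (Y - y_min) / (y_max - y_min), y_min, y_max
```

The returned Y_min and Y_max are the quantities reused later by the inverse normalization of formula (25).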
step 2, pre-training each RBM layer of the DBN model: the DBN model has K RBM layers in total; a single-layer RBM consists of a visible layer and a hidden layer, with v = {v_1, v_2, v_3, …, v_I} denoting the visible layer and h = {h_1, h_2, …, h_J} denoting the hidden layer, wherein I is the number of visible-layer nodes and J is the number of hidden-layer nodes; each node in the RBM takes a binary value in {0, 1}: when the node value is 1 the node is in the open state, and when the node value is 0 the node is in the closed state;
step 201, sorting the normalized data in step 1 according to a time sequence, dividing the last 24 groups of data in n groups of data as a test set, taking 70% of the rest n-24 groups of data as a training set, and taking the other 30% as a verification set;
step 202, initializing the parameters of the single-layer RBM: the training set is used as the visible-layer input of the first RBM layer; setting the RBM training period epoch, the learning rate τ, the number of visible-layer nodes I and the number of hidden-layer nodes J, and initializing the offset vectors a = {a_1, a_2, …, a_I}, b = {b_1, b_2, …, b_J} and the weight matrix w:
step 203, training this RBM layer: the joint energy of visible-layer node v_i, i ∈ {1, 2, …, I}, and hidden-layer node h_j, j ∈ {1, 2, …, J}, can be represented by the energy function of formula (6):
wherein θ = {w_ij, a_i, b_j}; w_ij is the connection weight between node v_i and node h_j, a_i is the offset of visible-layer node v_i, and b_j is the offset of hidden-layer node h_j;
the joint probability distribution of the visible layer node and the hidden layer node is shown as the formula (7):
wherein, Z (θ) is a normalization factor, also called a distribution function, as shown in equation (8):
given the nodes v_i of the RBM visible layer, the hidden-layer nodes h_j are conditionally independent of one another, and the conditional probability distribution of the hidden layer given the visible layer can be obtained from formula (9):
from formula (9), the probability that hidden-layer node h_j takes the value 1, namely its activation probability, is given by formula (10):
wherein exp () is an exponential function, Σi() Representing that all i are traversed to accumulate the terms in parentheses;
given the nodes h_j of the RBM hidden layer, the visible-layer nodes v_i are conditionally independent of one another, and the conditional probability distribution of the visible layer given the hidden layer can be obtained from formula (11):
from formula (11), the probability that visible-layer node v_i takes the value 1, namely its activation probability, is given by formula (12):
the marginal distribution of P(v, h | θ) over the hidden layer h is obtained from formula (7), and the activation probability of the visible layer v is then given by formula (13):
the parameters of the RBM are taken as the values of w_ij, a_i and b_j that maximize the activation probability P(v | θ); maximizing P(v | θ) may be transformed into maximizing the likelihood function L(θ), which takes the form of formula (14):
optimum parameter theta of RBM*From formula (15):
wherein arg[·] denotes taking the parameter value that maximizes the bracketed function;
the update of the RBM weights can be obtained by the contrastive divergence algorithm of formula (16):
wherein the learning rate τ takes a value between 0 and 1; ⟨·⟩_data is the expectation of the hidden-layer output given the sample data at the visible-layer input, and ⟨·⟩_recon is the expectation over the feature reconstruction estimated by the contrastive divergence algorithm;
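The contrastive divergence update of formula (16) can be sketched as a single CD-1 step for a binary RBM; this is an illustrative implementation under the standard sigmoid activation probabilities of formulas (10) and (12), not the patent's exact code, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, w, a, b, tau=0.1):
    """One CD-1 step for a binary RBM (formulas 10, 12, 16).

    v0 : (batch, I) visible-layer input
    w  : (I, J) weight matrix, a : (I,) visible bias, b : (J,) hidden bias
    tau: learning rate in (0, 1)
    """
    # positive phase: hidden activation probabilities given the data (eq. 10)
    ph0 = sigmoid(v0 @ w + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample h ~ P(h|v0)
    # negative phase: one-step reconstruction of the visible layer (eq. 12)
    pv1 = sigmoid(h0 @ w.T + a)
    ph1 = sigmoid(pv1 @ w + b)
    n = v0.shape[0]
    # gradient = <.>_data - <.>_recon, applied with learning rate tau (eq. 16)
    w += tau * (v0.T @ ph0 - pv1.T @ ph1) / n
    a += tau * (v0 - pv1).mean(axis=0)
    b += tau * (ph0 - ph1).mean(axis=0)
    return w, a, b
```

Iterating this update for the set training period epoch over the training batches constitutes the pre-training of one RBM layer described in step 203.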
step 204, taking the output of the RBM of the previous layer as the input of the RBM of the next layer, repeating the steps 202 to 203 until the energy function of the RBM of the next layer is converged, and obtaining the output of the next layer;
step 205, repeating step 204, and training the RBMs layer by layer until all RBMs in K layers are trained;
step 3, fine adjustment is carried out on the DBN model: the aim is to minimize the overall reconstruction error of the DBN model;
step 301, inputting the input data of the normalized training set into the pre-trained DBN model in step 2 to obtain an output intermediate vector G;
step 302, transmitting the output G as input to a top-layer SVM model, and outputting power quality prediction result data H;
step 303, comparing the H with the real power quality output data of the training set, transmitting an error back to the bottom DBN model, and finely adjusting the parameters of the whole DBN model until the error is within a set range;
step 304, if in step 303 the error cannot be brought within the set range but the number of fine-tuning iterations reaches the set maximum, the adjustment process is ended;
step 4, carrying out feature extraction on the environmental indexes influencing the power quality based on the DBN model: inputting the normalized training set and test set input data into the optimal DBN model obtained in step 3, and respectively extracting the feature matrices of the training set and test set sample inputs;
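Extracting features through the pre-trained DBN amounts to a deterministic forward pass through the stacked RBMs, producing the intermediate matrix that step 301 calls G; a minimal sketch, assuming sigmoid hidden activations and illustrative names:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_extract_features(X, layers):
    """Propagate normalized samples through the K stacked RBMs.

    `layers` is a list of (w, b) pairs taken from the pre-trained RBMs;
    each layer's deterministic hidden probabilities feed the next layer,
    and the top-layer output is the feature matrix passed to the SVM.
    """
    G = np.asarray(X, dtype=float)
    for w, b in layers:
        G = sigmoid(G @ w + b)
    return G
```

The same function applied to the test set input yields the test feature matrix used in step 505.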
step 5, predicting the steady-state index of the electric energy quality based on the top-layer SVM model:
step 501, performing nonlinear mapping on the feature matrix input by the training set sample extracted in step 4 to a high-dimensional feature space, and performing linear approximation in the space, wherein the form of an approximation function f (x') can be represented by formula (17):
f(x′)=ω·Φ(x′)+d (17)
in the formula, "·" is the inner-product operator; ω is an adjustable weight vector in the high-dimensional feature space; x' is the input of the SVM model, with dimension n'; Φ(x') is the nonlinear mapping of the input x'; and d is a bias term;
the ω and d values are estimated by minimizing functional equation (18):
wherein R_reg[f] is the regularized risk functional, R_emp[f] is the empirical risk, γ is a regularization constant, ||·|| denotes the Euclidean norm, t is an index traversing 1 to n', and y'_t denotes the t-th output of the SVM model;
equation (18) is equivalent to solving the optimization problem shown in equation (19):
with the constraint of
in the formula, min T denotes minimizing the objective function T; ξ_t and ξ_t* are slack variables; c is a penalty constant, and the larger the value of c, the higher the degree of data fitting; the coefficient ε controls the width of the regression approximation error tube and determines the fitting precision on the training set: the larger the value of ε, the fewer the support vectors and the lower the precision;
introducing a kernel function method to convert the formulas (19) and (20) into the forms of formulas (21) and (22):
with the constraint of
in the formula, max T denotes maximizing the objective function T, and s is an index traversing 1 to n'; l_s, l_s*, l_t, l_t* are the four Lagrange multipliers, namely the solution that minimizes R_reg[f]; x'_s denotes the sample input with index s, and Kernel() denotes a kernel function;
solving the quadratic programming shown in equations (21) and (22) to obtain a nonlinear mapping shown in equation (23):
wherein Kernel(x'_t, x'_u) = Φ(x'_t)·Φ(x'_u) is a kernel function satisfying the Mercer condition; an RBF kernel function, namely a radial basis kernel function, is selected, whose expression is shown in formula (24):
Kernel(x′t,x′u)=exp(-g||x′t-x′u||2),g>0 (24)
in the formula, the parameter g is the kernel width (gamma) parameter of the RBF kernel function, and u = 1, 2, …, n';
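Formula (24) can be written down directly; a small sketch assuming the squared Euclidean norm as stated:

```python
import numpy as np

def rbf_kernel(xt, xu, g):
    """RBF kernel of formula (24): exp(-g * ||x'_t - x'_u||^2), g > 0."""
    xt = np.asarray(xt, dtype=float)
    xu = np.asarray(xu, dtype=float)
    return float(np.exp(-g * np.sum((xt - xu) ** 2)))
```

The kernel is symmetric and equals 1 when the two inputs coincide, which is what makes it usable in the quadratic programming of formulas (21) and (22).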
step 502, a grid search method is adopted: the parameters c and g are each discretized over the range [0.001, 1000] with a multiplicative step of 10, forming a grid of candidate (c, g) parameter combinations;
step 503, traversing all the grid values of c and g, constructing a power quality prediction model based on the SVM model according to the regression process of step 501, predicting the output of the verification set, and selecting the pair of c and g parameters that minimizes the prediction error, namely the optimal parameter combination c_opt, g_opt;
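Steps 502 and 503 can be sketched as a multiplicative grid search; the trainer is left pluggable (for example a caller-supplied wrapper around an off-the-shelf RBF support vector regressor), and all names here are illustrative rather than the patent's:

```python
import itertools
import numpy as np

def grid_values(lo=1e-3, hi=1e3, factor=10.0):
    """Step 502: multiplicative grid over [lo, hi] with step factor 10."""
    vals = []
    v = lo
    while v <= hi * (1 + 1e-12):  # small tolerance for float round-off
        vals.append(v)
        v *= factor
    return vals

def grid_search(fit_predict, X_tr, y_tr, X_val, y_val):
    """Step 503: traverse all (c, g) pairs and return the pair that
    minimizes the validation error.

    fit_predict(c, g, X_tr, y_tr, X_val) is any SVR trainer that fits on
    the training set and returns predictions for X_val.
    """
    best = (None, None, np.inf)
    for c, g in itertools.product(grid_values(), grid_values()):
        y_hat = fit_predict(c, g, X_tr, y_tr, X_val)
        err = float(np.sqrt(np.mean((np.asarray(y_hat) - y_val) ** 2)))
        if err < best[2]:
            best = (c, g, err)
    return best  # (c_opt, g_opt, minimum validation RMSE)
```

With the 7 grid values per parameter this traverses 49 (c, g) combinations, matching the exhaustive traversal the claim describes.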
step 504, calculating the prediction accuracy of the verification set; if the prediction accuracy cannot meet the requirement, re-dividing the training set and the verification set and repeating steps 501 to 503 to obtain the SVM model under the optimal division mode;
step 505, inputting the test set into the SVM model under the optimal division mode obtained in the step 504 to obtain a power quality index prediction result;
step 6, performing inverse normalization on the prediction result and carrying out error analysis:
step 601, performing inverse normalization processing on the prediction result in step 505 according to equation (25):
Ypre=Ypre′×(Ymax-Ymin)+Ymin (25)
wherein Y_pre and Y_pre' are the power quality prediction model output data after and before the inverse normalization processing, and Y_max and Y_min are respectively the maximum and minimum values of the elements in the output matrix Y;
step 602, recording the output of the test set as the actual output Y_real, and calculating the relative error index RE of the prediction result by formula (26):
step 603, calculating the root mean square error indicator RMAE of the predicted result according to equation (27):
wherein N_pre is the number of elements in the data set Y_pre.
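Formulas (26) and (27) are not fully reproduced in this extracted text; a sketch assuming formula (25) as written, a standard relative-error form for formula (26), and the usual root-mean-square form for formula (27) (function names are illustrative):

```python
import numpy as np

def denormalize(y_pre_norm, y_min, y_max):
    """Inverse normalization of formula (25): Y_pre = Y_pre' * (Y_max - Y_min) + Y_min."""
    return np.asarray(y_pre_norm, dtype=float) * (y_max - y_min) + y_min

def relative_error(y_pre, y_real):
    """Relative error RE, assumed form of formula (26): |pred - real| / |real|."""
    y_pre = np.asarray(y_pre, dtype=float)
    y_real = np.asarray(y_real, dtype=float)
    return np.abs(y_pre - y_real) / np.abs(y_real)

def rmse(y_pre, y_real):
    """Root-mean-square error, assumed form of formula (27) (called RMAE in the text)."""
    y_pre = np.asarray(y_pre, dtype=float)
    y_real = np.asarray(y_real, dtype=float)
    return float(np.sqrt(np.mean((y_pre - y_real) ** 2)))
```

Applying these to the denormalized test-set predictions and Y_real reproduces the error indices of steps 601 to 603.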
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911023579.2A CN110808581B (en) | 2019-10-25 | 2019-10-25 | Active power distribution network power quality prediction method based on DBN-SVM |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110808581A CN110808581A (en) | 2020-02-18 |
CN110808581B true CN110808581B (en) | 2021-02-02 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |