Disclosure of Invention
The invention mainly aims to provide a load flow calculation method based on a load flow embedding technique, which is simple and convenient, computes quickly, can be used for on-line load flow calculation, has no convergence problem, and can calculate the load flow of a power grid of any topology with at most N nodes.
In order to achieve the purpose, the invention adopts the technical scheme that:
the invention discloses a load flow calculation method based on a load flow embedding technology, which comprises the following steps:
1) determining the maximum node number N and the maximum PV node number K, and constructing a training set K, a verification set V and a test set T;
2) constructing a corresponding positive sample set K+ and negative sample set K- for the training data in step 1);
3) taking the training samples K together with the positive samples K+ and negative samples K- from step 2) as inputs to a triplet-based twin neural network, and obtaining the power flow embedding layer P after full training;
4) taking the trend embedded layer in the step 3) as a first hidden layer of the deep neural network, training the deep neural network on the basis, and reserving the trained parameters of the deep neural network;
5) and (4) for any power grid needing load flow calculation, using the deep neural network trained in the step 4) to obtain a corresponding load flow solution through forward calculation.
Preferably, the step 1) further comprises:
1-1) determining the size of the data set, for each data, performing the following steps:
firstly, determining the node number n and the PV node number k in a random number form, and randomly numbering each node;
generating an n-node connected graph according to graph theory and a depth-first search algorithm, randomly generating the impedance of each connected edge, and obtaining and storing an admittance matrix;
defining node 1 as the balance node, nodes 2 to (k+1) as PV nodes, and the remaining nodes as PQ nodes, and randomly generating the voltage amplitude v and phase angle theta of each node;
calculating the active power p and the reactive power q of each node according to the power flow equations, which are described as:
p_i = v_i Σ_{j∈i} v_j (G_ij cos θ_ij + B_ij sin θ_ij),
q_i = v_i Σ_{j∈i} v_j (G_ij sin θ_ij - B_ij cos θ_ij),
wherein p_i and q_i respectively represent the active and reactive power of node i, v_i represents the voltage magnitude of node i, G_ij and B_ij respectively represent the real and imaginary parts of the element in row i, column j of the node admittance matrix, θ_ij = θ_i - θ_j is the voltage phase difference between node i and node j, and the symbol j ∈ i means that node j is connected to node i, including the case i = j;
using the active power p and the reactive power q of a PQ node, the active power p and the voltage amplitude v of a PV node, the voltage amplitude v and the voltage phase angle theta of a balance node, and the upper triangular matrix G of the real part and the upper triangular matrix B of the imaginary part of an admittance matrix as an input part in a data set, and using the input part as a certain row in the input matrix;
sixthly, taking the voltage amplitude v and the voltage phase angle theta of all nodes as a certain row of the label matrix;
1-2) dividing the generated data set into a training set K, a verification set V and a test set T.
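The data-generation steps above can be sketched roughly as follows (a sketch under stated assumptions: the function names, the impedance ranges, and the random-spanning-tree trick used to guarantee connectivity are illustrative, not the patented procedure):

```python
import math
import random

def random_connected_grid(n, seed=0):
    """Build a random n-node connected graph (a random spanning tree plus a
    few extra edges) with random branch impedances, and return the n x n
    complex bus admittance matrix Y as a list of lists."""
    rng = random.Random(seed)
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = set()
    # Attaching each node to an earlier one yields a spanning tree,
    # which guarantees connectivity.
    for i in range(1, n):
        a, b = nodes[i], nodes[rng.randrange(i)]
        edges.add((min(a, b), max(a, b)))
    # A few extra edges create meshes, as in a real grid.
    for _ in range(n // 2):
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    Y = [[0j] * n for _ in range(n)]
    for a, b in edges:
        z = complex(rng.uniform(0.01, 0.1), rng.uniform(0.1, 1.0))  # branch impedance
        y = 1 / z                                                   # branch admittance
        Y[a][b] -= y
        Y[b][a] -= y
        Y[a][a] += y
        Y[b][b] += y
    return Y

def power_flow_pq(Y, v, theta):
    """Evaluate p_i and q_i from the power flow equations for a given state."""
    n = len(v)
    p, q = [0.0] * n, [0.0] * n
    for i in range(n):
        for j in range(n):  # includes j == i, as in the text
            G, B = Y[i][j].real, Y[i][j].imag
            t = theta[i] - theta[j]
            p[i] += v[i] * v[j] * (G * math.cos(t) + B * math.sin(t))
            q[i] += v[i] * v[j] * (G * math.sin(t) - B * math.cos(t))
    return p, q
```

With no shunt elements, every row of Y sums to zero, so a flat start (all v = 1, all θ = 0) injects zero power at every node, which is a convenient sanity check on the construction.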
Preferably, the step 2) further comprises:
2-1) for each datum (x_i, t_i) in the training set, the following steps are executed:
firstly, applying a small perturbation to t_i to obtain t_i+, retaining the original admittance matrix, and calculating the active power p+ and reactive power q+ of each node according to the power flow equation to form the positive input x_i+;
secondly, applying a large perturbation to t_i to obtain t_i-, retaining the original admittance matrix, and calculating the active power p- and reactive power q- of each node according to the power flow equation to form the negative input x_i-;
thirdly, outputting the corresponding positive sample (x_i+, t_i+) and negative sample (x_i-, t_i-);
2-2) taking (x_i+, t_i+) as a certain row of the input matrix and the label matrix in the positive sample set, and (x_i-, t_i-) as a certain row of the input matrix and the label matrix in the negative sample set, and outputting the positive sample set K+ and the negative sample set K- of the training set.
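The positive/negative sample construction can be sketched as follows (a sketch under stated assumptions: the perturbation scales, the uniform noise model, and the helper names are illustrative; only the overall recipe of perturbing the label while keeping the admittance matrix fixed comes from the text):

```python
import math
import random

def pq_from_state(Y, v, theta):
    """p_i = v_i * sum_j v_j (G_ij cos th_ij + B_ij sin th_ij), likewise q_i."""
    n = len(v)
    p, q = [0.0] * n, [0.0] * n
    for i in range(n):
        for j in range(n):
            G, B = Y[i][j].real, Y[i][j].imag
            t = theta[i] - theta[j]
            p[i] += v[i] * v[j] * (G * math.cos(t) + B * math.sin(t))
            q[i] += v[i] * v[j] * (G * math.sin(t) - B * math.cos(t))
    return p, q

def perturbed_sample(Y, v, theta, scale, seed=0):
    """Perturb the label t = (v, theta) by uniform noise of the given scale,
    keep the original admittance matrix Y, and recompute (p, q).  A small
    scale yields a positive sample, a large scale a negative one."""
    rng = random.Random(seed)
    v2 = [x + rng.uniform(-scale, scale) for x in v]
    th2 = [x + rng.uniform(-scale, scale) for x in theta]
    p2, q2 = pq_from_state(Y, v2, th2)
    return (p2, q2), (v2, th2)  # perturbed input part and perturbed label
```

Usage: calling `perturbed_sample(Y, v, theta, 0.01)` gives a positive pair and `perturbed_sample(Y, v, theta, 0.3)` a negative pair for the same anchor datum.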
Preferably, the step 3) further comprises:
3-1) the twin neural network model based on the triplet loss function is shown in fig. 1; taking x_i in the training set K, x_i+ in the positive sample set K+ and x_i- in the negative sample set K- as the inputs of the twin neural network, the model can be described as:
y1 = W x_i + b,
y2 = W x_i+ + b,
y3 = W x_i- + b,
d1 = ||y1 - y2||,
d2 = ||y1 - y3||,
Loss = max(d1 - d2 + margin, 0),
wherein the embedding layer coefficients P consist of the final (W, b);
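A minimal plain-Python sketch of the triplet computation above (the shared linear embedding y = Wx + b follows the model equations; the Euclidean metric and the margin value are illustrative assumptions):

```python
import math

def embed(W, b, x):
    # Shared linear embedding layer y = W x + b, used by all three branches.
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def triplet_loss(W, b, x, x_pos, x_neg, margin=1.0):
    """Twin network with shared weights: embed the anchor x, the positive
    sample and the negative sample, then penalise cases where the anchor is
    not closer to the positive than to the negative by at least `margin`."""
    y1, y2, y3 = embed(W, b, x), embed(W, b, x_pos), embed(W, b, x_neg)
    d1 = math.dist(y1, y2)  # anchor-to-positive distance
    d2 = math.dist(y1, y3)  # anchor-to-negative distance
    return max(d1 - d2 + margin, 0.0)
```

When the negative sample is already far enough away (d2 > d1 + margin) the loss is exactly zero, so training focuses on triplets that violate the margin.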
3-2) outputting the embedding layer parameters P after full training with the Adam algorithm, wherein the Adam update rule for a parameter x is:
m_n = β1·m_{n-1} + (1-β1)·g_n,
v_n = β2·v_{n-1} + (1-β2)·g_n²,
m̂_n = m_n / (1-β1^n),
v̂_n = v_n / (1-β2^n),
x_n = x_{n-1} - α·m̂_n / (√v̂_n + ε),
wherein the subscript n indicates the nth iteration, g_n denotes the gradient of f(x) at x_{n-1}, m_n and m̂_n respectively represent the first moment estimate and the bias-corrected first moment estimate of the gradient, and v_n and v̂_n respectively represent the second moment estimate and the bias-corrected second moment estimate of the gradient.
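The Adam update rule above can be sketched as follows (the update itself is the standard Adam algorithm; the hyperparameter values and the quadratic test function in the usage note are illustrative assumptions):

```python
import math

def adam_minimize(grad, x0, steps=3000, alpha=0.05,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimise a function given its gradient, using the Adam rule:
    moment estimates with bias correction, then a scaled gradient step."""
    x = list(x0)
    m = [0.0] * len(x)
    v = [0.0] * len(x)
    for n in range(1, steps + 1):
        g = grad(x)
        for i in range(len(x)):
            m[i] = beta1 * m[i] + (1 - beta1) * g[i]       # first moment
            v[i] = beta2 * v[i] + (1 - beta2) * g[i] ** 2  # second moment
            m_hat = m[i] / (1 - beta1 ** n)                # bias correction
            v_hat = v[i] / (1 - beta2 ** n)
            x[i] -= alpha * m_hat / (math.sqrt(v_hat) + eps)
    return x
```

For example, minimising f(x) = (x0 - 3)² + (x1 + 1)² from the origin converges close to (3, -1).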
Preferably, the step 4) further comprises:
4-1) selecting a deep neural network, taking x_i in the training set K as its input and t_i in K as its label, initializing the parameters of the first hidden layer of the network to the embedding layer parameters P, selecting an initialization scheme for the parameters of the other layers according to the chosen deep neural network, and describing the forward computation of the network as:
o_i = f(P, W1, b1, W2, b2, …, Wn, bn, x_i),
4-2) minimizing the loss function with the Adam optimization method, and after full training outputting and retaining the parameters (W1, b1, W2, b2, …, Wn, bn) of each layer.
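A minimal sketch of the forward computation o_i = f(P, W1, b1, …, Wn, bn, x_i), where the first hidden layer is initialised from the embedding parameters P (the ReLU activations and the linear output layer are assumptions; the patent leaves the concrete network choice open):

```python
def forward(P, layers, x):
    """Forward pass: the first hidden layer uses the embedding parameters
    P = (W, b); the remaining layers in `layers` (a list of (W, b) pairs)
    use ReLU activations, except a linear output layer at the end."""
    W0, b0 = P
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bi)
         for row, bi in zip(W0, b0)]
    for k, (W, b) in enumerate(layers):
        z = [sum(w * hi for w, hi in zip(row, h)) + bi for row, bi in zip(W, b)]
        h = z if k == len(layers) - 1 else [max(0.0, zi) for zi in z]
    return h
```

In a framework such as TensorFlow the same effect is obtained by setting the first layer's initial weights to P and then training all layers jointly.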
Preferably, the step 5) further comprises:
5-1) for any power grid, arranging the active power and reactive power of the PQ nodes, the active power and voltage amplitude of the PV nodes, the voltage amplitude and voltage phase angle of the balance node, and the real and imaginary parts of the admittance matrix into the neural network input x_i, substituting it into o_i = f(P, W1, b1, W2, b2, …, Wn, bn, x_i), and outputting o_i, i.e. the voltage magnitude and voltage angle of all nodes.
Compared with the prior art, the invention has the following beneficial effects:
the calculation method is a direct method, when in final use, the known parameters are only required to be arranged according to rules and then used as the input of a deep neural network, and the final load flow value can be obtained through multiplication of a plurality of matrixes and nonlinear operation of neurons, so that the method is simple, the calculation speed is high, the method can be used for on-line load flow calculation, and the problem of convergence does not exist; the calculation method can be used for not only the power grid with the fixed topological structure, but also solving the load flow of the variable topological power grid with any node number not more than N.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
In the following, taking N = 14 as an example and referring to the relevant drawings, the technical solutions in the embodiments of the present invention will be clearly and completely described. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The invention provides a load flow calculation method of an N-node internal variable topology power grid based on deep learning, which is simple and convenient, has high calculation speed, can be used for on-line load flow calculation, and does not have the problem of convergence. The method mainly comprises the following steps:
determining the maximum node number N = 14 and the maximum PV node number K = 1, constructing a training set, a verification set and a test set, and executing the following steps for each datum:
1-1) taking a random integer between 3 and N (here N = 14) as the number n of grid nodes, taking a random integer between 0 and K (here K = 1) as the number k of PV nodes, and randomly numbering the nodes; the following takes n = 5 and k = 1 as an example;
1-2) generating an n-node connected graph according to graph theory and a depth-first search algorithm, randomly generating the impedance of each connected edge, and storing it in the form of an admittance matrix. Taking the power grid in step 1-1) as an example, the topology of the network can be obtained by the algorithm shown in fig. 3, and its admittance matrix satisfies Y_ij = Y_ji, i.e. the matrix is symmetric.
1-3) defining node 1 as the balance node, nodes 2 to (k+1) as PV nodes, and the remaining nodes as PQ nodes, and randomly generating the voltage amplitude v and phase angle θ of each node. Taking the power grid in steps 1-1) to 1-2) as an example, node 1 of the network is the balance node, node 2 is the PV node, and nodes 3 to 5 are PQ nodes; the randomly generated v and θ are:
v = [v1 v2 v3 v4 v5],
θ = [θ1 θ2 θ3 θ4 θ5],
1-4) calculating the active power p and the reactive power q of each node according to the power flow equations:
p_i = v_i Σ_{j∈i} v_j (G_ij cos θ_ij + B_ij sin θ_ij),
q_i = v_i Σ_{j∈i} v_j (G_ij sin θ_ij - B_ij cos θ_ij),
taking the power grid in the steps 1-1) -1-3) as an example, substituting v and theta of each node in the step 1-3) into a power flow equation to obtain values of active power p and reactive power q as follows:
p = [p1 p2 p3 p4 p5],
q = [q1 q2 q3 q4 q5],
1-5) taking the active power p and the reactive power q of a PQ node, the active power p and the voltage amplitude v of a PV node, the voltage amplitude v and the voltage phase angle theta of a balance node, and a real upper triangular matrix G and an imaginary upper triangular matrix B of an admittance matrix as input parts in a data set, and taking the input parts as a certain row in an input matrix. Two points need to be noted:
the admittance matrix is symmetrical, and the elements on the diagonal can be obtained from other elements of the row or the column, so that only the upper triangular element of the admittance matrix is needed;
② because the adopted deep learning model is fully connected, the input dimensions must be consistent, while the training data may contain power grids with different node numbers and different topologies, so the dimension of each datum would otherwise differ. For grids with fewer nodes, the unknown power flow states are set to zero. Differing numbers of PV nodes are handled with triplets, each triplet containing three quantities (p, q, v): for a grid with k PV nodes, k triplets take the form (p, 0, v) and the rest take the form (p, q, 0). Taking the power grid in steps 1-1) to 1-4) as an example, node 2 is a PV node, and the row of the input matrix can be represented as:
Input_14 = [v1 θ1 p2 0 v2 p3 p4 p5 0 0 0 0 0 0 0 0 0 q3 q4 q5 0 0 0 0 0 0 0 0 0
G12~G1,14 G23~G2,14 G34~G3,14 G45~G4,14 G56~G5,14 G67~G6,14 G78~G7,14 G89~G8,14 G9,10~G9,14 G10,11~G10,14 G11,12~G11,14 G12,13~G12,14 G13,14
B12~B1,14 B23~B2,14 B34~B3,14 B45~B4,14 B56~B5,14 B67~B6,14 B78~B7,14 B89~B8,14 B9,10~B9,14 B10,11~B10,14 B11,12~B11,14 B12,13~B12,14 B13,14],
when k = 0 in step 1-1), i.e. when node 2 is a PQ node, the row of the input matrix is represented as:
Input_14 = [v1 θ1 p2 q2 0 p3 p4 p5 0 0 0 0 0 0 0 0 0 q3 q4 q5 0 0 0 0 0 0 0 0 0
G12~G1,14 G23~G2,14 G34~G3,14 G45~G4,14 G56~G5,14 G67~G6,14 G78~G7,14 G89~G8,14 G9,10~G9,14 G10,11~G10,14 G11,12~G11,14 G12,13~G12,14 G13,14
B12~B1,14 B23~B2,14 B34~B3,14 B45~B4,14 B56~B5,14 B67~B6,14 B78~B7,14 B89~B8,14 B9,10~B9,14 B10,11~B10,14 B11,12~B11,14 B12,13~B12,14 B13,14],
1-6) taking the voltage amplitude v and voltage phase angle θ of all nodes as a certain row of the label matrix. This row of the label matrix can be represented as:
Target_14 = [v1 v2 v3 v4 v5 v6 v7 v8 v9 v10 v11 v12 v13 v14 θ1 θ2 θ3 θ4 θ5 θ6 θ7 θ8 θ9 θ10 θ11 θ12 θ13 θ14],
taking the power grid in steps 1-1) -1-4) as an example, the row of the tag matrix can be represented as:
Target_5=[v1 v2 v3 v4 v5 0 0 0 0 0 0 0 0 0 θ1 θ2 θ3 θ4 θ5 0 0 0 0 0 0 0 0 0],
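The zero-padded row layout above can be sketched as follows (a sketch under stated assumptions: the helper names and argument order are illustrative, with the node-2 triplet handled as in the Input_14 example):

```python
def input_row(N, v1, th1, node2, p_rest, q_rest, G_upper, B_upper):
    """Assemble one zero-padded input row for an n-node grid inside the
    N-node template.  node2 is ('PV', p2, v2) or ('PQ', p2, q2); p_rest and
    q_rest hold the powers of nodes 3..n (equal lengths).  G_upper and
    B_upper are the flattened upper-triangular admittance parts, each
    N*(N-1)/2 long in real use (shortened below for illustration)."""
    kind, a, b = node2
    triplet = [a, 0.0, b] if kind == 'PV' else [a, b, 0.0]  # (p,0,v) vs (p,q,0)
    pad = [0.0] * (N - 2 - len(p_rest))
    return [v1, th1] + triplet + p_rest + pad + q_rest + pad + G_upper + B_upper

def label_row(N, v, theta):
    """Voltage magnitudes and angles of all n nodes, each zero-padded to N."""
    return v + [0.0] * (N - len(v)) + theta + [0.0] * (N - len(theta))
```

For the five-node example with PV node 2, the row starts [v1, θ1, p2, 0, v2, p3, p4, p5] followed by nine zeros, matching the Input_14 layout.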
and dividing the generated data set into a training set, a verification set and a test set; in this embodiment, the sizes of the training set, verification set and test set are 400000, 60000 and 40000, respectively;
for each training datum (x_i, t_i) in step 2), the following steps are executed:
3-1) applying a small perturbation Δt to t_i to obtain t_i+ = t_i + Δt, retaining the original admittance matrix, and calculating the active power p+ and reactive power q+ of each node according to the power flow equation; the p+ and q+ obtained after the small perturbation can be expressed as:
p_i+ = v_i+ Σ_{j∈i} v_j+ (G_ij cos θ_ij+ + B_ij sin θ_ij+),
q_i+ = v_i+ Σ_{j∈i} v_j+ (G_ij sin θ_ij+ - B_ij cos θ_ij+),
3-2) applying a large perturbation Δt' to t_i to obtain t_i- = t_i + Δt', retaining the original admittance matrix, and calculating the active power p- and reactive power q- of each node according to the power flow equation; the p- and q- obtained after the large perturbation can be expressed as:
p_i- = v_i- Σ_{j∈i} v_j- (G_ij cos θ_ij- + B_ij sin θ_ij-),
q_i- = v_i- Σ_{j∈i} v_j- (G_ij sin θ_ij- - B_ij cos θ_ij-),
3-3) taking (x_i+, t_i+) as a certain row of the input matrix and the label matrix in the positive sample set, and (x_i-, t_i-) as a certain row of the input matrix and the label matrix in the negative sample set;
with TensorFlow as the platform, Python as the programming language and a stochastic gradient algorithm as the optimization method, the twin neural network model based on the triplet loss function shown in fig. 1 is obtained after full training, and can be described as:
y1 = W x_i + b,
y2 = W x_i+ + b,
y3 = W x_i- + b,
d1 = ||y1 - y2||,
d2 = ||y1 - y3||,
Loss = max(d1 - d2 + margin, 0),
wherein the embedding layer coefficients P consist of the final (W, b);
taking the power flow embedding layer as the first hidden layer of the deep neural network, and training the deep neural network on this basis to calculate variable-topology power grids with node number not exceeding N; taking a residual network as an example, the final residual network model structure with the embedding layer is shown in fig. 5;
for any power grid requiring load flow calculation, the corresponding load flow solution is obtained with the trained deep neural network. Taking the five-node power grid shown in fig. 4 as an example, the following steps are performed:
6-1) sampling the network to obtain related parameters, wherein the node parameters of the five-node power grid are shown in table 1, and the line parameters are shown in table 2;
table 1: five-node grid node parameters
Table 2: five-node power grid line parameters
6-2) obtaining the real part G and the imaginary part B of the admittance matrix Y and the input of the deep neural network according to tables 1 and 2:
Input=[1.0501.050523.71.60000000001.01.30.8000000000
0000000000000000000000000-0.8299-0.6240000000000-0.7547000000000000000000000000000000000000000000000000000000
0033.3333000000000066.6667000000000003.11203.90020000000002.6415000000000000000000000000000000000000000000000000000000],
6-3) taking the input as the input of the deep neural network, and carrying out forward calculation to obtain corresponding output.
The feasibility of the invention is illustrated by the data below. Table 3 shows the errors on the training set and on the test set for several stacked autoencoders without the embedding layer, taking N = 50 and K = 3.
Table 3: load flow calculation result based on stacked self-encoder
From table 3 it can be found that networks with more hidden-layer neurons and more hidden layers tend to have better training and testing results, as can be seen by comparing the network structures 2554- and by comparing the stacked autoencoder with the network structure 2554-.
Table 4: stack-type self-encoder training and test time
Table 4 shows training and testing times of two different structures of the triple-hidden-layer stacked self-encoder, where the testing time is the time required for calculating a load flow value of the fourteen-node power grid shown in fig. 2, such as the fourteen-node power grid shown in fig. 2, a power grid node parameter is shown in table 5, and a power grid line parameter is shown in table 6;
table 5: fourteen-node power grid node parameter
Table 6: fourteen-node power grid line parameter
For the fourteen-node grid shown in fig. 2, the Newton-Raphson method requires about 0.113314 s, and according to the results in table 4 a stacked autoencoder obtains the result faster. For the same grid, when the resistance values are increased 100-fold, the Jacobian matrix becomes singular under the Newton-Raphson method, so the iteration error oscillates and fails to converge. When the stacked autoencoder with network structure 2554-2000-1000-500-100 in table 3 is used to calculate the load flow of this network and the result is substituted back into the power flow equations, the maximum error of the active power is 0.03 (kVA) and the maximum error of the reactive power is 0.02 (kVA), which shows that the present invention can provide a reference value for ill-conditioned load flow to a certain extent.
Through the above embodiment, it is shown on the one hand that the deep neural network is capable of solving the load flow calculation problem, and on the other hand, by comparison with the Newton-Raphson method, the advantages of the invention in calculation speed and convergence are demonstrated. To further illustrate the capability of the embedding layer, we train four different deep neural networks with embedding layers, namely a shallow BP neural network, a deep BP neural network, a deep ReLU neural network, and a residual network, taking N = 14 and K = 1, and apply them to the five-node power grid shown in fig. 4.
Tables 7-10 show the simulation results of the shallow BP, deep BP, and deep ReLU neural networks with the embedding layer, and of the residual network, respectively. The voltage amplitudes and amplitude errors in the tables are in p.u., and the voltage angles and angle errors are in degrees.
Table 7: shallow BP neural network load flow calculation experiment result with embedded layer
Table 8: deep BP neural network load flow calculation experimental result with embedded layer
Table 9: deep ReLU neural network load flow calculation experimental result with embedded layer
Table 10: residual network load flow calculation experiment result with embedded layer
Table 7 shows the experimental results of applying the trained shallow BP neural network with the embedding layer to the five-node grid shown in fig. 4. Judging from the voltage amplitude and voltage angle errors, the shallow neural network can to a certain extent be used for load flow calculation of a power grid, but its accuracy is poor;
table 8 shows the experimental results of applying a trained deep BP neural network with embedded layers (ten layers) to a five-node grid as shown in fig. 4. By comparing the results between tables 7 and 8, the deep BP neural network is far less effective in simulation than the shallow BP neural network because the deep BP neural network has a problem of gradient disappearance. Meanwhile, the problem that the gradient disappears cannot be solved by the tide embedding technology is also explained;
table 9 shows the experimental results of applying a trained deep ReLU neural network with embedded layers (ten layers) to a five-node grid as shown in fig. 4. By comparing table 8 with table 9, it can be found that the error of the deep ReLU neural network is much smaller than that of the deep BP neural network with the same structure;
table 10 shows the experimental results of applying a trained deep residual network with embedded layers (ten layers) to a five-node grid as shown in fig. 4. By comparing tables 7 to 10, the residual network is best in performance.
Table 11: network simulation results without embedded layer
Table 11 shows the simulation results of each network without the embedding layer. Comparing tables 7 to 11, every network except the deep BP neural network improves to a certain extent with the support of the power flow embedding technique, and even the simplest shallow BP neural network achieves relatively good results. This shows that the power flow embedding technique can, to a certain extent, extract the latent features of the grid input data and thereby further improve the expressiveness of the model.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and improvements can be made without departing from the spirit of the present invention, and these modifications and improvements should also be considered as within the scope of the present invention.