CN112072634B - Load flow calculation method based on load flow embedding technology - Google Patents

Publication number: CN112072634B (application CN201910495388.XA)
Authority: CN (China)
Prior art keywords: neural network, power, training, node, load flow
Legal status: Active
Application number: CN201910495388.XA
Other versions: CN112072634A (Chinese)
Inventors: 李艳君, 叶倩莹, 潘树文
Assignees: Zhejiang Tianneng Power Energy Co Ltd; Hangzhou City University
Application filed by Zhejiang Tianneng Power Energy Co Ltd and Hangzhou City University

Classifications

    • H02J3/00 Circuit arrangements for ac mains or ac distribution networks
    • H02J3/04 Circuit arrangements for ac mains or ac distribution networks for connecting networks of the same frequency but supplied from different sources
    • H02J3/06 Controlling transfer of power between connected networks; Controlling sharing of load between connected networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention discloses a load flow calculation method based on a load flow embedding technology, which comprises the following steps: determining the maximum node number N, and constructing a training set K, a verification set V and a test set T; constructing corresponding positive samples K+ and negative samples K- for the training data in step 1); based on the training set K and the positive samples K+ and negative samples K- from step 2), using a triplet-based twin neural network to obtain the power flow embedding layer P after full training. The calculation method is a direct method: in final use, the known parameters need only be arranged according to the prescribed rules and used as the input of a deep neural network, and the final load flow values can be obtained through a few matrix multiplications and the nonlinear operations of the neurons.

Description

Load flow calculation method based on load flow embedding technology
Technical Field
The invention relates to the technical field of power systems, in particular to a load flow calculation method based on a load flow embedding technology.
Background
With the continuous development of new energy technology, a new energy utilization system of energy internet appears. The energy internet is used for integrating a series of power grid operation data to predict various conditions under the support of artificial intelligence technologies such as big data and machine learning, and finally all machines, equipment and systems can be dynamically adjusted in real time, so that the overall operation efficiency of the power grid is improved.
With the advent of the big data era and the improvement of computer technology, the performance of neural networks, especially deep learning, in the field of artificial intelligence greatly exceeds that of other machine learning models, and the neural networks are widely applied and have remarkable achievement in the fields of speech recognition, image classification, natural language processing and the like.
Applying the deep learning technology to load flow calculation is a new exploration that aims to extend the application of deep learning in electric power systems. Load flow calculation is a very important analysis of the electric power system: accurately and quickly estimating the load flow values of a power grid is the basis of every calculation link in the electric power system and the premise of analyzing the stability and reliability of the electric power system.
Applying deep learning to load flow calculation also supplements the traditional load flow calculation methods. Under the new situation of the energy Internet, the power grid structures involved in load flow calculation become more and more complex, and the requirements on the rapidity and convergence of the algorithms are more severe. The Gauss-Seidel iteration method is simple in principle and occupies little computer memory, but its iteration count can grow and non-convergence can occur when it is applied to a large-scale power system. The Newton-Raphson method converges fast with high precision, but the Jacobian matrix must be recalculated in every iteration, causing excessive memory usage, low calculation speed and other problems. The fast decoupled method improves the computation speed and memory usage, but may fail to converge under certain ill-conditioned cases.
The load flow calculation is essentially the solution of a set of nonlinear equations, and the deep learning is a tool with strong nonlinear fitting capability to a certain extent, so that the deep learning for solving the load flow has certain feasibility, and therefore, a load flow calculation method based on a load flow embedding technology is provided.
Disclosure of Invention
The invention mainly aims to provide a load flow calculation method based on a load flow embedding technology, which is simple and convenient, has higher calculation speed, can be used for on-line load flow calculation, has no convergence problem, and can calculate the load flow value of any topological structure power grid in N nodes.
In order to achieve the purpose, the invention adopts the technical scheme that:
the invention discloses a load flow calculation method based on a load flow embedding technology, which comprises the following steps:
1) determining the maximum node number N and the maximum PV node number K, and constructing a training set K, a verification set V and a test set T;
2) constructing corresponding positive samples K+ and negative samples K- for the training data in step 1);
3) based on the training set K and the positive samples K+ and negative samples K- from step 2), using a triplet-based twin neural network and obtaining the power flow embedding layer P after full training;
4) taking the power flow embedding layer from step 3) as the first hidden layer of a deep neural network, training the deep neural network on this basis, and retaining the trained parameters of the deep neural network;
5) for any power grid needing load flow calculation, using the deep neural network trained in step 4) to obtain the corresponding load flow solution through forward calculation.
Preferably, the step 1) further comprises:
1-1) determining the size of the data set, for each data, performing the following steps:
① determining the node number n and the PV node number k as random numbers, and randomly numbering each node;
② generating an n-node connected graph according to graph theory and a depth-first search algorithm, randomly generating the impedance of each connected edge, and obtaining and storing the admittance matrix;
③ defining node 1 as the balance node, nodes 2 to (k+1) as PV nodes and the remaining nodes as PQ nodes, and randomly generating the voltage magnitude v and phase angle θ of each node;
④ calculating the active power p and the reactive power q of each node according to the power flow equations, which are described as:

    p_i = v_i Σ_{j∈i} v_j (G_ij cos θ_ij + B_ij sin θ_ij),

    q_i = v_i Σ_{j∈i} v_j (G_ij sin θ_ij - B_ij cos θ_ij),

wherein p_i and q_i respectively represent the active and reactive power of node i, v_i represents the voltage magnitude of node i, G_ij and B_ij respectively represent the real and imaginary parts of the element in row i, column j of the node admittance matrix, θ_ij = θ_i - θ_j is the voltage phase difference between node i and node j, and the symbol j ∈ i means that node j is connected with node i, including the case i = j;
⑤ using the active power p and reactive power q of the PQ nodes, the active power p and voltage magnitude v of the PV nodes, the voltage magnitude v and voltage phase angle θ of the balance node, and the real-part upper triangular matrix G and imaginary-part upper triangular matrix B of the admittance matrix as the input part of the data set, forming one row of the input matrix;
⑥ taking the voltage magnitude v and voltage phase angle θ of all nodes as the corresponding row of the label matrix;
1-2) dividing the generated data set into a training set K, a verification set V and a test set T.
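The power flow equations used in step 1) can be sketched in NumPy (an illustrative sketch with our own function and variable names, not the patent's implementation):

```python
import numpy as np

def bus_injections(G, B, v, theta):
    """Compute the active/reactive power injections p_i, q_i of every node
    from the real and imaginary parts (G, B) of the admittance matrix,
    the voltage magnitudes v and the phase angles theta (radians)."""
    dtheta = theta[:, None] - theta[None, :]  # theta_ij = theta_i - theta_j
    p = v * ((G * np.cos(dtheta) + B * np.sin(dtheta)) @ v)
    q = v * ((G * np.sin(dtheta) - B * np.cos(dtheta)) @ v)
    return p, q
```

With a flat voltage profile (equal magnitudes, zero angles) on a two-bus network the injections vanish, which is a quick sanity check of the sign conventions.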
Preferably, the step 2) further comprises:
2-1) for each datum (x_i, t_i) in the training set, the following steps are executed:
① apply a small perturbation Δt to t_i to obtain t_i+ = t_i + Δt; retaining the original admittance matrix, calculate the active power p+ and reactive power q+ of each node according to the power flow equations;
② apply a large perturbation Δt' to t_i to obtain t_i- = t_i + Δt'; retaining the original admittance matrix, calculate the active power p- and reactive power q- of each node according to the power flow equations;
③ output the corresponding positive sample (x_i+, t_i+) and negative sample (x_i-, t_i-);
2-2) taking (x_i+, t_i+) as a row of the input matrix and the label matrix in the positive sample set, and (x_i-, t_i-) as a row of the input matrix and the label matrix in the negative sample set, output the positive sample set K+ and the negative sample set K- of the training set.
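The positive/negative sample construction in step 2) can be sketched as follows (the perturbation magnitudes `small` and `large` and the `flow_eval` helper are illustrative assumptions; the patent does not specify the perturbation sizes):

```python
import numpy as np

def make_pos_neg(t, flow_eval, small=0.01, large=0.3, rng=None):
    """Given a label vector t (voltage magnitudes/angles) and a function
    flow_eval(t) -> input-feature vector built from the power flow
    equations under the ORIGINAL admittance matrix, return a positive
    sample (small perturbation of t) and a negative sample (large one)."""
    rng = np.random.default_rng(rng)
    t_pos = t + small * rng.standard_normal(t.shape)  # small disturbance
    t_neg = t + large * rng.standard_normal(t.shape)  # large disturbance
    return (flow_eval(t_pos), t_pos), (flow_eval(t_neg), t_neg)
```

Each returned pair then forms one row of the input matrix and the label matrix of the corresponding sample set.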
Preferably, the step 3) further comprises:
3-1) based on the twin neural network model with the triplet loss function shown in fig. 1, x_i in the training set K, x_i+ in the positive sample set K+, and x_i- in the negative sample set K- are taken as the inputs of the twin neural network; the model can be described as:

    y1 = W x_i + b,
    y2 = W x_i+ + b,
    y3 = W x_i- + b,
    d1 = ||y1 - y2||,
    d2 = ||y1 - y3||,
    Loss = max(d1 - d2 + margin, 0),

wherein the embedding layer parameter P consists of the final (W, b);
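A minimal sketch of the shared linear embedding and triplet-style loss described above (the `margin` value is an assumption; the patent does not state its value):

```python
import numpy as np

def triplet_loss(W, b, x, x_pos, x_neg, margin=1.0):
    """Shared linear embedding y = W x + b applied to the anchor,
    positive and negative inputs; hinge-style triplet loss
    max(d1 - d2 + margin, 0)."""
    y1, y2, y3 = W @ x + b, W @ x_pos + b, W @ x_neg + b
    d1 = np.linalg.norm(y1 - y2)  # anchor-to-positive distance
    d2 = np.linalg.norm(y1 - y3)  # anchor-to-negative distance
    return max(d1 - d2 + margin, 0.0)
```

The loss is zero whenever the negative sample is already at least `margin` farther from the anchor than the positive sample.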
3-2) adopting the Adam algorithm, the embedding layer parameter P is output after full training; the Adam rule for updating a parameter x is:

    g_n = ∇f(x_{n-1}),
    m_n = β1·m_{n-1} + (1 - β1)·g_n,
    v_n = β2·v_{n-1} + (1 - β2)·g_n²,
    m̂_n = m_n / (1 - β1^n),
    v̂_n = v_n / (1 - β2^n),
    x_n = x_{n-1} - α·m̂_n / (√v̂_n + ε),

wherein the subscript n denotes the n-th iteration, g_n denotes the gradient of f(x) at x_{n-1}, m_n and m̂_n respectively denote the first-moment estimate of the gradient and its bias-corrected version, and v_n and v̂_n respectively denote the second-moment estimate of the gradient and its bias-corrected version.
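A single Adam update following the rule above can be written out directly (the hyperparameter defaults for α, β1, β2 and ε are the commonly used values, assumed here):

```python
import numpy as np

def adam_step(x, g, m, v, n, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Perform the n-th Adam update (n >= 1) of parameter x, given the
    gradient g and the running first/second moment estimates m, v."""
    m = beta1 * m + (1 - beta1) * g       # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g   # second-moment estimate
    m_hat = m / (1 - beta1 ** n)          # bias correction
    v_hat = v / (1 - beta2 ** n)
    x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v
```

On the very first step the bias corrections cancel the (1 - β) factors, so the parameter moves by approximately α in the direction opposite to the gradient.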
Preferably, the step 4) further comprises:
4-1) selecting a deep neural network, taking x_i in the training set K as its input and t_i in K as its label; the parameters of the first hidden layer of the network are initialized to the embedding layer parameter P, and the parameters of the other layers are initialized according to the selected deep neural network; the forward calculation of the network is described as:

    o_i = f(P, W1, b1, W2, b2, …, Wn, bn, x_i),
    Loss = (1/|K|) Σ_i ||o_i - t_i||²,

4-2) adopting the Adam optimization method to minimize the Loss function; after full training, output and retain the parameters (W1, b1, W2, b2, …, Wn, bn) of each layer.
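Step 4), a fully connected network whose first hidden layer is initialized with the trained embedding parameters, might be sketched as follows (the layer sizes and the ReLU activation are illustrative assumptions):

```python
import numpy as np

def forward(x, layers):
    """Forward pass through a fully connected network; `layers` is a list
    of (W, b) pairs. ReLU on hidden layers, linear output layer."""
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)  # hidden layers (the first one is P)
    W, b = layers[-1]
    return W @ x + b

def build_with_embedding(P_W, P_b, hidden_sizes, out_dim, rng=None):
    """First hidden layer seeded with the trained embedding parameters P;
    the remaining layers are randomly initialized."""
    rng = np.random.default_rng(rng)
    layers = [(P_W, P_b)]
    dims = [P_W.shape[0]] + list(hidden_sizes) + [out_dim]
    for din, dout in zip(dims[:-1], dims[1:]):
        layers.append((0.1 * rng.standard_normal((dout, din)), np.zeros(dout)))
    return layers
```

After training, inference is a single call to `forward` on the arranged input vector, which is the direct calculation the patent describes.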
Preferably, the step 5) further comprises:
5-1) for any power grid, arrange the active power and reactive power of the PQ nodes, the active power and voltage magnitude of the PV nodes, the voltage magnitude and voltage phase angle of the balance node, and the real and imaginary parts of the admittance matrix into the neural network input x_i; substituting it into o_i = f(P, W1, b1, W2, b2, …, Wn, bn, x_i), the output o_i is the voltage magnitude and voltage phase angle of all nodes.
Compared with the prior art, the invention has the following beneficial effects:
the calculation method is a direct method, when in final use, the known parameters are only required to be arranged according to rules and then used as the input of a deep neural network, and the final load flow value can be obtained through multiplication of a plurality of matrixes and nonlinear operation of neurons, so that the method is simple, the calculation speed is high, the method can be used for on-line load flow calculation, and the problem of convergence does not exist; the calculation method can be used for not only the power grid with the fixed topological structure, but also solving the load flow of the variable topological power grid with any node number not more than N.
Drawings
FIG. 1 is a twin neural network model based on a triplet loss function in a power flow calculation method based on a power flow embedding technology.
Fig. 2 is a fourteen-node power grid involved in the load flow calculation method based on the load flow embedding technology.
Fig. 3 is a flow chart of generating a power grid topology in the power flow calculation method based on the power flow embedding technology.
Fig. 4 is a five-node power grid involved in the power flow calculation method based on the power flow embedding technology.
Fig. 5 is a residual error network model with embedded layers in a power flow calculation method based on the power flow embedding technology.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
In the following, taking N = 14 as an example and with reference to the relevant drawings, the technical solutions in the embodiments of the present invention will be clearly and completely described. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The invention provides a load flow calculation method of an N-node internal variable topology power grid based on deep learning, which is simple and convenient, has high calculation speed, can be used for on-line load flow calculation, and does not have the problem of convergence. The method mainly comprises the following steps:
Determining the maximum node number N = 14 and the maximum PV node number K = 1, construct a training set, a verification set and a test set, executing the following steps for each datum:
1-1) take a random integer between 3 and N (N = 14 in this example) as the number n of grid nodes and randomly number the nodes; take a random integer between 0 and K (K = 1 in this example) as the number k of grid PV nodes; in the following, n = 5 and k = 1 are taken as an example;
1-2) generating an n-node connected graph according to graph theory and a depth-first search algorithm, randomly generating the impedance of each connected edge, and storing them in the form of an admittance matrix; taking the power grid in step 1-1) as an example, the topological structure of the network can be obtained according to the algorithm shown in fig. 3, and the admittance matrix of the network can be expressed as:
    [admittance matrix Y of the 5-node example, given only as an image in the original],

wherein Y_ij = Y_ji; the final (numerical) admittance matrix is likewise given only as an image in the original.
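The topology generation of step 1-2), i.e. a random n-node connected graph whose connectivity is guaranteed by construction and can be verified by depth-first search, might be sketched as (the spanning-tree construction and the extra-edge probability are our own illustrative choices):

```python
import random

def random_connected_graph(n, extra_edge_prob=0.3, seed=None):
    """Build a random connected graph on nodes 0..n-1: first a random
    spanning tree (guaranteeing connectivity), then extra random edges."""
    rng = random.Random(seed)
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = set()
    for i in range(1, n):  # spanning tree: attach each node to an earlier one
        edges.add(tuple(sorted((nodes[i], rng.choice(nodes[:i])))))
    for u in range(n):     # optional extra edges
        for w in range(u + 1, n):
            if rng.random() < extra_edge_prob:
                edges.add((u, w))
    return edges

def is_connected(n, edges):
    """Depth-first search connectivity check, as the patent suggests."""
    adj = {u: [] for u in range(n)}
    for u, w in edges:
        adj[u].append(w); adj[w].append(u)
    seen, stack = {0}, [0]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb); stack.append(nb)
    return len(seen) == n
```

Each edge would then receive a random impedance, from which the admittance matrix is assembled.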
1-3) define node 1 as the balance node, nodes 2 to (k+1) as PV nodes, and the remaining nodes as PQ nodes; randomly generate the voltage magnitude v and phase angle θ of each node. Taking the power grid in steps 1-1) to 1-2) as an example, node 1 is the balance node, node 2 is the PV node, and nodes 3 to 5 are PQ nodes; the randomly generated v and θ take the form:

    v = [v1 v2 v3 v4 v5],
    θ = [θ1 θ2 θ3 θ4 θ5],
1-4) calculate the active power p and the reactive power q of each node according to the power flow equations:

    p_i = v_i Σ_{j∈i} v_j (G_ij cos θ_ij + B_ij sin θ_ij),

    q_i = v_i Σ_{j∈i} v_j (G_ij sin θ_ij - B_ij cos θ_ij),

taking the power grid in steps 1-1) to 1-3) as an example, substituting v and θ of each node from step 1-3) into the power flow equations yields:

    p = [p1 p2 p3 p4 p5],
    q = [q1 q2 q3 q4 q5],
1-5) taking the active power p and reactive power q of the PQ nodes, the active power p and voltage magnitude v of the PV nodes, the voltage magnitude v and voltage phase angle θ of the balance node, and the real-part upper triangular matrix G and imaginary-part upper triangular matrix B of the admittance matrix as the input part of the data set, forming one row of the input matrix. Two points need to be noted:
① the admittance matrix is symmetric, and each diagonal element can be obtained from the other elements of its row (or column), so only the upper triangular elements of the admittance matrix are needed;
② because the adopted deep learning model is fully connected, the input dimensions must be consistent, yet the training data may include power grids with different node numbers and different topological structures, so the dimension of each datum would otherwise differ. For grids with fewer than N nodes, the unknown power flow states are set to zero. Differing PV node counts are handled with K triplets, each consisting of the three quantities (p, q, v): for a grid with k PV nodes, k of the triplets take the form (p, 0, v), and the rest take the form (p, q, 0). Taking the power grid in steps 1-1) to 1-4) as an example, node 2 is a PV node, and the corresponding row of the input matrix can be represented as:
Input_14 = [v1 θ1 p2 0 v2 p3 p4 p5 0 0 0 0 0 0 0 0 0 q3 q4 q5 0 0 0 0 0 0 0 0 0
    G12~G1(14) G23~G2(14) G34~G3(14) G45~G4(14) G56~G5(14) G67~G6(14) G78~G7(14) G89~G8(14) G9(10)~G9(14) G(10)(11)~G(10)(14) G(11)(12)~G(11)(14) G(12)(13)~G(12)(14) G(13)(14)
    B12~B1(14) B23~B2(14) B34~B3(14) B45~B4(14) B56~B5(14) B67~B6(14) B78~B7(14) B89~B8(14) B9(10)~B9(14) B(10)(11)~B(10)(14) B(11)(12)~B(11)(14) B(12)(13)~B(12)(14) B(13)(14)],
when k is 0 in step 1-1), i.e., when node 2 is a PQ node, the row of the input matrix is represented as:

Input_14 = [v1 θ1 p2 q2 0 p3 p4 p5 0 0 0 0 0 0 0 0 0 q3 q4 q5 0 0 0 0 0 0 0 0 0
    G12~G1(14) G23~G2(14) G34~G3(14) G45~G4(14) G56~G5(14) G67~G6(14) G78~G7(14) G89~G8(14) G9(10)~G9(14) G(10)(11)~G(10)(14) G(11)(12)~G(11)(14) G(12)(13)~G(12)(14) G(13)(14)
    B12~B1(14) B23~B2(14) B34~B3(14) B45~B4(14) B56~B5(14) B67~B6(14) B78~B7(14) B89~B8(14) B9(10)~B9(14) B(10)(11)~B(10)(14) B(11)(12)~B(11)(14) B(12)(13)~B(12)(14) B(13)(14)],
1-6) taking the voltage magnitude v and voltage phase angle θ of all nodes as a row of the label matrix, this row of the label matrix can be represented as:

Target_14 = [v1 v2 v3 v4 v5 v6 v7 v8 v9 v(10) v(11) v(12) v(13) v(14) θ1 θ2 θ3 θ4 θ5 θ6 θ7 θ8 θ9 θ(10) θ(11) θ(12) θ(13) θ(14)],

taking the power grid in steps 1-1) to 1-4) as an example, the row of the label matrix can be represented as:

Target_5 = [v1 v2 v3 v4 v5 0 0 0 0 0 0 0 0 0 θ1 θ2 θ3 θ4 θ5 0 0 0 0 0 0 0 0 0],
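The fixed-dimension zero-padding behind Target_5, in which an n-node grid (n ≤ N) is padded to the N-node layout, can be illustrated for the label row as follows (a simplified sketch; the input row with the G and B upper triangles follows the same padding idea):

```python
import numpy as np

def make_label_row(v, theta, N):
    """Pad the voltage magnitudes and phase angles of an n-node grid
    (n <= N) with zeros to a fixed-length row [v_1..v_N, th_1..th_N]."""
    n = len(v)
    row = np.zeros(2 * N)
    row[:n] = v          # magnitudes of the n real nodes, zeros beyond
    row[N:N + n] = theta  # angles of the n real nodes, zeros beyond
    return row
```

This is what allows one fully connected network to serve every topology with at most N nodes.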
and dividing the generated data set into a training set, a verification set and a test set. In this embodiment, the training set, the verification set, and the test set are 400000, 60000, and 40000, respectively;
for each training data (x) in step 2)i,ti) The following steps are executed:
3-1) apply a small perturbation Δt to t_i to obtain t_i+ = t_i + Δt; retaining the original admittance matrix, calculate the active power p+ and reactive power q+ of each node according to the power flow equations. The quantities obtained after the small perturbation can be expressed as:

    t_i+ = t_i + Δt,
    p_i+ = v_i+ Σ_{j∈i} v_j+ (G_ij cos θ_ij+ + B_ij sin θ_ij+),
    q_i+ = v_i+ Σ_{j∈i} v_j+ (G_ij sin θ_ij+ - B_ij cos θ_ij+),

3-2) apply a large perturbation Δt' to t_i to obtain t_i- = t_i + Δt'; retaining the original admittance matrix, calculate the active power p- and reactive power q- of each node according to the power flow equations. The quantities obtained after the large perturbation can be expressed as:

    t_i- = t_i + Δt',
    p_i- = v_i- Σ_{j∈i} v_j- (G_ij cos θ_ij- + B_ij sin θ_ij-),
    q_i- = v_i- Σ_{j∈i} v_j- (G_ij sin θ_ij- - B_ij cos θ_ij-),

3-3) taking (x_i+, t_i+) as a row of the input matrix and the label matrix in the positive sample set, and (x_i-, t_i-) as a row of the input matrix and the label matrix in the negative sample set;
the triple loss function based twin neural network model is obtained after full training by taking Tensorflow as a platform, python as a programming language and a random gradient algorithm as an optimization method, and can be described as follows, as shown in the attached figure 1:
y1=Wxi+b,
Figure GDA0002173915070000111
Figure GDA0002173915070000112
d1=||y1-y2||,
d2=||y1-y3||,
Figure GDA0002173915070000113
wherein the embedding layer coefficient P consists of the final (W, b);
taking the power flow embedding layer in the step 2) as a first hidden layer of the deep neural network, and training the deep neural network on the basis of the first hidden layer to calculate a variable topology power grid with the number of nodes not more than N. Taking the residual error network as an example, the final residual error network model structure with an embedded layer is shown in fig. 5;
and (5) for any power grid needing load flow calculation, obtaining a corresponding load flow solution by using the deep neural network trained in the step 5). Taking the five-node power grid as shown in fig. 4 as an example, the following steps are performed:
6-1) sampling the network to obtain related parameters, wherein the node parameters of the five-node power grid are shown in table 1, and the line parameters are shown in table 2;
table 1: five-node grid node parameters
    [table values given only as an image in the original]
Table 2: five-node power grid line parameters
    [table values given only as an image in the original]
6-2) obtaining the real part G and the imaginary part B of the admittance matrix Y and the input of the deep neural network according to tables 1 and 2:

    [the matrices G and B and the assembled Input vector are given only as images and run-together numerals in the original],
6-3) taking the input as the input of the deep neural network, and carrying out forward calculation to obtain corresponding output.
The feasibility of the invention is illustrated by the data below. Table 3 shows the errors on the training set and on the test set for several stacked self-encoders without embedding layers, taking N = 50 and K = 3.
Table 3: load flow calculation result based on stacked self-encoder
    [table values given only as an image in the original]
According to table 3, networks with more hidden-layer neurons and more hidden layers tend to have better training and testing results, as can be seen by comparing the network structures listed in table 3 and by comparing the stacked self-encoding networks of different depths.
Table 4: stack-type self-encoder training and test time
    [table values given only as an image in the original]
Table 4 shows the training and testing times of two three-hidden-layer stacked self-encoders of different structures, where the testing time is the time required to calculate the load flow values of the fourteen-node power grid shown in fig. 2; the node parameters of this grid are shown in table 5 and the line parameters in table 6;
table 5: fourteen-node power grid node parameter
    [table values given only as an image in the original]
Table 6: fourteen-node power grid line parameter
    [table values given only as an image in the original]
For the fourteen-node grid shown in fig. 2, the Newton-Raphson method requires about 0.113314 s; according to the results in table 4, results can be obtained faster using a stacked self-encoder. For the same fourteen-node grid, when the resistance values are increased 100-fold, the Jacobian matrix becomes singular under the Newton-Raphson method, so its error oscillates up and down and cannot converge. When the stacked self-encoder with the network structure 2554-2000-1000-500-100 from table 3 is used to calculate the load flow values of this network and the results are substituted back into the power flow equations, the maximum error of the active power is 0.03 (KVA) and the maximum error of the reactive power is 0.02 (KVA), which shows that the present invention can provide reference values for ill-conditioned load flow to a certain extent.
Through the embodiment, on one hand, the deep neural network is shown to be capable of solving the load flow calculation problem; on the other hand, the comparison with the Newton method demonstrates the advantages of the invention in calculation speed and convergence. To further illustrate the capability of the embedding layer proposed in the present invention, four different deep neural networks with embedding layers, namely a shallow BP neural network, a deep BP neural network, a deep ReLU neural network and a residual network, are trained taking N = 14 and K = 1 as an example and applied to the five-node power grid shown in fig. 4.
Tables 7-10 show the simulation results of the shallow BP neural network, deep BP neural network, deep ReLU neural network and residual network with embedding layers, respectively. The unit of the voltage magnitude and voltage magnitude error in the tables is p.u., and the unit of the voltage angle and voltage angle error is degrees.
Table 7: shallow BP neural network load flow calculation experiment result with embedded layer
    [table values given only as an image in the original]
Table 8: deep BP neural network load flow calculation experimental result with embedded layer
    [table values given only as an image in the original]
Table 9: deep ReLU neural network load flow calculation experimental result with embedded layer
    [table values given only as an image in the original]
Table 10: residual network load flow calculation experiment result with embedded layer
    [table values given only as an image in the original]
Table 7 shows the experimental results of applying the trained shallow BP neural network with an embedding layer to the five-node grid shown in fig. 4. Judging from the voltage magnitude and voltage angle errors, the shallow neural network can to a certain extent be used for load flow calculation of a power grid, but its accuracy is poor;
Table 8 shows the experimental results of applying the trained deep BP neural network with an embedding layer (ten layers) to the five-node grid shown in fig. 4. Comparing tables 7 and 8, the deep BP neural network performs far worse in simulation than the shallow BP neural network because it suffers from the vanishing-gradient problem; this also shows that the power flow embedding technique cannot by itself solve the vanishing-gradient problem;
Table 9 shows the experimental results of applying the trained deep ReLU neural network with an embedding layer (ten layers) to the five-node grid shown in fig. 4. Comparing tables 8 and 9, the error of the deep ReLU neural network is much smaller than that of the deep BP neural network with the same structure;
Table 10 shows the experimental results of applying the trained deep residual network with an embedding layer (ten layers) to the five-node grid shown in fig. 4. Comparing tables 7 to 10, the residual network performs best.
Table 11: network simulation results without embedded layer
    [table values given only as an image in the original]
Table 11 shows the simulation results of each network without an embedding layer. Comparing tables 7 to 11, the performance of every network except the deep BP neural network is improved to a certain extent with the support of the power flow embedding technique; even the simplest shallow BP neural network obtains relatively good experimental results, which shows that the power flow embedding technique can to a certain extent mine the latent features of the power grid input data and thereby further improve the expressiveness of the model.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and improvements can be made without departing from the spirit of the present invention, and these modifications and improvements should also be considered as within the scope of the present invention.

Claims (7)

1. A power flow calculation method based on a power flow embedding technology, characterized by comprising the following steps:
1) determining the maximum node number N, and constructing a training set K, a verification set V and a test set T;
2) constructing a corresponding positive sample set K+ and negative sample set K- for the training set K in step 1);
2-1) for data (x) in each training seti,ti) The following steps are executed:
first pair tiApplying a small perturbation to obtain
Figure FDA0003633374200000011
Reserving an original admittance matrix, and calculating the active power of each node according to a load flow equation
Figure FDA0003633374200000012
And reactive power
Figure FDA0003633374200000013
Die pair ofiApplying a large perturbation
Figure FDA0003633374200000014
Reserving an original admittance matrix, and calculating the active power of each node according to a load flow equation
Figure FDA0003633374200000015
And reactive power
Figure FDA0003633374200000016
Outputting corresponding positive samples
Figure FDA0003633374200000017
And negative examples
Figure FDA0003633374200000018
2-2) mixing
Figure FDA0003633374200000019
As a certain row of the input matrix and the label matrix in the positive sample set,
Figure FDA00036333742000000110
Figure FDA00036333742000000111
as a certain row of the input matrix and the label matrix in the negative sample set, a positive sample set K of the training set K is output+And negative sample set K-
3) Based on the training set K and the positive sample set K in the step 2)+Negative sample set K-And obtaining the trend after full training by using the triplet-based twin neural networkAn embedding layer P;
3-1) based on triple twin neural network model, training x in set KiPositive sample set K+In
Figure FDA00036333742000000112
Negative sample set K-In
Figure FDA00036333742000000113
As an input of the twin neural network, the forward process of the twin neural network is described as:
y1=Wxi+b,
Figure FDA00036333742000000114
Figure FDA00036333742000000115
d1=||y1-y2||,
d2=||y1-y3||,
Figure FDA0003633374200000021
3-2) minimizing a Loss function by adopting an Adam optimization method, and outputting a weight value from an input layer to a hidden layer and an offset (W, b) as an embedded layer parameter P after full training;
4) taking the trend embedded layer in the step 3) as a first hidden layer of the deep neural network, training the deep neural network on the basis, and reserving parameters of the trained deep neural network;
5) and (4) for any power grid needing load flow calculation, using the deep neural network trained in the step 4) to obtain a corresponding load flow solution through forward calculation.
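As an illustration outside the claim text, the shared-weight forward pass and triplet loss of step 3-1) can be sketched in Python with NumPy. All dimensions, the random values and the margin α = 1.0 are hypothetical, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 4 grid input features mapped to a 3-dimensional embedding.
n_in, n_emb = 4, 3
W = rng.normal(size=(n_emb, n_in))  # shared weights of the twin branches
b = rng.normal(size=n_emb)          # shared bias

def embed(x):
    # One branch of the twin network: y = W x + b
    return W @ x + b

def triplet_loss(x, x_pos, x_neg, margin=1.0):
    y1, y2, y3 = embed(x), embed(x_pos), embed(x_neg)
    d1 = np.linalg.norm(y1 - y2)    # anchor-to-positive distance
    d2 = np.linalg.norm(y1 - y3)    # anchor-to-negative distance
    return max(d1 - d2 + margin, 0.0)

x = rng.normal(size=n_in)                  # anchor sample x_i
x_pos = x + 0.01 * rng.normal(size=n_in)   # small perturbation -> positive sample
x_neg = x + 1.0 * rng.normal(size=n_in)    # large perturbation -> negative sample

loss = triplet_loss(x, x_pos, x_neg)
```

Minimizing this loss (with Adam, per step 3-2) pulls embeddings of slightly perturbed states together and pushes strongly perturbed states apart; (W, b) then become the embedding layer parameters P.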
2. The power flow calculation method based on the power flow embedding technique according to claim 1, wherein step 4) further comprises:
4-1) selecting a deep neural network, taking the samples xi of the training set K as its input and the corresponding ti of the training set K as labels; initializing the parameters of the first hidden layer of the network to the embedding layer parameters P, and initializing the parameters of the other layers in the manner appropriate to the selected deep neural network; the forward computation of the network is described as:
oi = f(P, W1, b1, W2, b2, …, Wn, bn, xi),
Loss = Σi ||oi − ti||²,
4-2) minimizing the Loss function with the Adam optimization method, and after sufficient training outputting and retaining the parameters (W1, b1, W2, b2, …, Wn, bn) of each layer.
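The training scheme of step 4) can be sketched as follows, a minimal NumPy illustration in which the first hidden layer is initialized from hypothetical embedding parameters and, for brevity, only the output layer is updated by plain gradient descent (the patent trains all layers with Adam):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_emb, n_out = 4, 3, 2  # toy sizes, not from the patent

# Hypothetical pretrained embedding parameters P = (W_e, b_e) from step 3).
W_e = rng.normal(size=(n_emb, n_in))
b_e = rng.normal(size=n_emb)

# Deep net: first hidden layer initialized to the embedding layer.
W1, b1 = W_e.copy(), b_e.copy()
W2 = 0.1 * rng.normal(size=(n_out, n_emb))
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(W1 @ x + b1)      # embedding layer acts as the first hidden layer
    return W2 @ h + b2

x = rng.normal(size=n_in)         # one training sample x_i
t = rng.normal(size=n_out)        # its label t_i

def loss():
    # Squared-error loss of step 4-1) for the single sample
    return float(np.sum((forward(x) - t) ** 2))

loss0 = loss()
lr = 0.05
for _ in range(200):              # plain gradient descent on the output layer
    h = np.tanh(W1 @ x + b1)
    err = (W2 @ h + b2) - t       # d(Loss)/d(o_i) up to a factor of 2
    W2 -= lr * 2.0 * np.outer(err, h)
    b2 -= lr * 2.0 * err
```

After the loop the loss is driven essentially to zero for this single sample, mirroring step 4-2)'s minimization before the layer parameters are retained.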
3. The power flow calculation method based on the power flow embedding technique according to claim 1, wherein step 5) further comprises:
5-1) for any power grid, forming the neural network input xi from the active and reactive power of each PQ node, the active power and voltage magnitude of each PV node, the voltage magnitude and voltage phase angle of the balance (slack) node, and the real and imaginary parts of the admittance matrix; substituting xi into oi = f(P, W1, b1, W2, b2, …, Wn, bn, xi), the output oi being the voltage magnitude and voltage phase angle of all nodes.
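The input construction of step 5-1) can be illustrated for a hypothetical three-node grid (one balance node, one PV node, one PQ node; all numerical values invented for the example):

```python
import numpy as np

# Hypothetical 3-node grid: node 0 balance (slack), node 1 PV, node 2 PQ.
pq_P, pq_Q = [-0.9], [-0.3]        # PQ node: active / reactive power (p.u.)
pv_P, pv_V = [0.5], [1.02]         # PV node: active power / voltage magnitude
slack_V, slack_theta = 1.0, 0.0    # balance node: voltage magnitude / phase angle

# Bus admittance matrix (complex); its real and imaginary parts enter the input.
Y = np.array([[ 5 - 15j, -2 +  6j, -3 +  9j],
              [-2 +  6j,  4 - 12j, -2 +  6j],
              [-3 +  9j, -2 +  6j,  5 - 15j]])

# Flatten everything into the neural network input vector x_i of claim 3:
x_i = np.concatenate([pq_P, pq_Q, pv_P, pv_V,
                      [slack_V, slack_theta],
                      Y.real.ravel(), Y.imag.ravel()])
# 6 electrical quantities + 2*9 admittance entries -> a 24-element vector
```

A single forward pass of the trained network on such a vector then yields the voltage magnitudes and phase angles of all nodes, with no iterative solution required.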
4. The power flow calculation method based on the power flow embedding technique according to claim 1, wherein: all training samples and test samples are obtained by simulation.
5. The power flow calculation method based on the power flow embedding technique according to claim 1, wherein: the twin neural network in step 3) adopts a single-hidden-layer feedforward neural network.
6. The power flow calculation method based on the power flow embedding technique according to claim 1, wherein: the neural networks involved include a shallow BP neural network, a deep BP neural network, a deep ReLU neural network, a stacked autoencoder and a residual network; the stacked autoencoder initializes its parameters by layer-wise pre-training and is then fine-tuned with the Adam algorithm; the parameters of the shallow BP, deep BP and deep ReLU neural networks are optimized by the back-propagation algorithm; and the parameters of the residual network are optimized directly with the Adam algorithm.
7. The power flow calculation method based on the power flow embedding technique according to claim 1, wherein: the trained deep neural network can be used to calculate the power flow of a grid with any topology of up to N nodes.
CN201910495388.XA 2019-06-10 2019-06-10 Load flow calculation method based on load flow embedding technology Active CN112072634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910495388.XA CN112072634B (en) 2019-06-10 2019-06-10 Load flow calculation method based on load flow embedding technology


Publications (2)

Publication Number Publication Date
CN112072634A CN112072634A (en) 2020-12-11
CN112072634B true CN112072634B (en) 2022-06-24

Family

ID=73658132





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310000 No.51 Huzhou street, Gongshu District, Hangzhou City, Zhejiang Province

Applicant after: HANGZHOU City University

Address before: 310000 No.51 Huzhou street, Gongshu District, Hangzhou City, Zhejiang Province

Applicant before: Zhejiang University City College

TA01 Transfer of patent application right

Effective date of registration: 20220127

Address after: 310000 No.51 Huzhou street, Gongshu District, Hangzhou City, Zhejiang Province

Applicant after: HANGZHOU City University

Applicant after: ZHEJIANG TIANNENG POWER ENERGY Co.,Ltd.

Address before: 310000 No.51 Huzhou street, Gongshu District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU City University

GR01 Patent grant