CN109995031B - Probability power flow deep learning calculation method based on physical model - Google Patents


Info

Publication number
CN109995031B
CN109995031B (application CN201910367856.5A)
Authority
CN
China
Prior art keywords
formula
layer
power
power system
equation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910367856.5A
Other languages
Chinese (zh)
Other versions
CN109995031A (en
Inventor
余娟
杨燕
杨知方
向明旭
代伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201910367856.5A priority Critical patent/CN109995031B/en
Publication of CN109995031A publication Critical patent/CN109995031A/en
Application granted granted Critical
Publication of CN109995031B publication Critical patent/CN109995031B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00Circuit arrangements for ac mains or ac distribution networks
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2203/00Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
    • H02J2203/20 Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Feedback Control In General (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention discloses a probabilistic power flow deep learning calculation method based on a physical model, which mainly comprises the following steps: 1) acquiring power system data; 2) establishing a loss function for the probabilistic power flow analysis deep neural network and updating the coding parameter θ of the deep neural network; 3) performing deep learning of the power system probabilistic power flow with the deep neural network; 4) establishing the probabilistic power flow deep learning calculation model. By combining data-driven techniques with the physical mechanisms of the power domain, the method addresses the difficulty of balancing huge computational cost against computational accuracy when solving the probabilistic power flow.

Description

Probability power flow deep learning calculation method based on physical model
Technical Field
The invention relates to the field of electric power systems and automation thereof, in particular to a probabilistic power flow deep learning calculation method based on a physical model.
Background
In recent years, renewable power generation has developed rapidly on a global scale. At the same time, the uncertainty of the power system increases dramatically with the large-scale integration of intermittent renewable energy sources. This rapid growth of uncertainty strongly affects all sectors of the power system and threatens the safe and stable operation of the grid. The probabilistic power flow is an important tool for power system uncertainty analysis: it can fully account for various random factors and provides comprehensive reference information for planning and operating the power system. However, the probabilistic power flow involves a large number of high-dimensional, complex nonlinear equations, and existing solution algorithms struggle to balance computational cost against computational accuracy. An efficient solution of the probabilistic power flow has therefore become an urgent problem in power systems with a high proportion of renewable energy.
Disclosure of Invention
The present invention is directed to solving the problems of the prior art.
The technical scheme adopted for achieving the purpose of the invention is that the probability power flow deep learning calculation method based on the physical model mainly comprises the following steps:
1) power system data is acquired. The power system data mainly includes wind speed, photovoltaic power and load.
2) Establish the loss function of the probabilistic power flow analysis deep neural network, and update the coding parameter θ and the accumulation variable r of the deep neural network.
Establishing the loss function of the probabilistic power flow analysis deep neural network mainly comprises the following steps:
2.1) Determine the objective function loss:

loss = (1/m) Σ_{k=1}^{m} ‖ Y_out^(k) − f_L( f_{L−1}( … f_1( X_in^(k) ) ) ) ‖².  (1)

where m is the number of training samples per training batch, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) denotes the first-layer encoding function, f_L(·) denotes the L-th-layer encoding function, and loss denotes the loss function.
When i = 1, 2, 3, …, L−1, the i-th-layer encoding function f_i(·) is as follows:

f_i(X) = R_i(w_i·X + b_i).  (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input of the encoding function.
When i = L, the L-th-layer encoding function f_L(·) is as follows:

f_L(X) = R_L(w_L·X + b_L).  (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer activation function R_i is as follows:

R_i(x) = max(0, x).  (4)
in the formula, x is the input of the neuron, i.e., the input data of the power system.
When i = L, the L-th-layer activation function is the identity:

R_i(x) = R_L(x) = x.  (5)
2.2) Preprocess the input and output data of the power system probabilistic power flow:

v_out = (v − v_mean) / v_std.  (6)

where v_out is the preprocessed input or output data vector of the power system probabilistic power flow, v is the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
2.3) Update the deep neural network parameter θ based on the objective function loss:

r_t = ρ·r_{t−1} + (1 − ρ)·g_t ⊙ g_t,
θ_t = θ_{t−1} − η·g_t / (√r_t + ε).  (7)

where g_t is the partial derivative of the objective function loss with respect to θ at the t-th update, ⊙ is the Hadamard (element-wise) product, r is the accumulated squared-gradient variable, ρ is the decay constant, η is the learning rate of the neural network, and ε is a small positive constant.
3) Perform deep learning of the power system probabilistic power flow with the deep neural network.
Deep learning of the power system probabilistic power flow with the deep neural network mainly comprises the following steps:
3.1) determining the probability power flow of the power system, namely establishing a reactive power equation and an active power equation of the probability power flow of the power system.
The branch active power P_ij from the i-th bus to the j-th bus of the power system is as follows:

P_ij = G_ij·(V_i² − V_i·V_j·cosθ_ij) − B_ij·V_i·V_j·sinθ_ij.  (8)

where V_i is the voltage magnitude of bus i, V_j is the voltage magnitude of bus j, θ_ij is the voltage phase-angle difference between bus i and bus j, and G_ij and B_ij are the conductance and susceptance between the i-th and the j-th bus, respectively.
The branch reactive power Q_ij from the i-th bus to the j-th bus of the power system is as follows:

Q_ij = −B_ij·(V_i² − V_i·V_j·cosθ_ij) − G_ij·V_i·V_j·sinθ_ij.  (9)
3.2) Determine the training target loss_new by taking the node voltages of the power system as the output vector:

loss_new = loss + f_P + f_Q.  (10)

where f_P(·) is the active-power equation penalty of the power system branches, f_Q(·) is the reactive-power equation penalty, and loss_new is the updated loss function.
The active-power penalty f_P of the power system branches is as follows:

f_P = (1/m) Σ ‖ P̂_out − P_out ‖².  (11)

where P̂_out is the normalized active power estimated by the network, P_out is the normalized branch active power of the standardized power system, and ‖·‖ denotes a norm.
Similarly, the reactive-power penalty f_Q of the power system branches is as follows:

f_Q = (1/m) Σ ‖ Q̂_out − Q_out ‖².  (12)

where Q̂_out is the normalized reactive power estimated by the network and Q_out is the normalized branch reactive power of the standardized power system.
3.3) update the weight w with back propagation, i.e.:
w^(i,T+1) = w^(i,T) − Δw^(i,T).  (13)
where Δw^(i,T) is the change of the weight matrix from the i-th layer to the (i+1)-th layer at the T-th parameter update, w^(i,T) is the weight from the i-th layer to the (i+1)-th layer at the T-th parameter update, and w^(i,T+1) is that weight at the (T+1)-th parameter update.
The change Δw^(i,T) of the weight matrix from the i-th layer to the (i+1)-th layer at the T-th parameter update is as follows:

Δw^(i,T) = η·dw^(i,T) / (√R^(i,T) + ε).  (14)

where R^(i,T) is the learning-rate decay variable at the T-th iterative weight update.
R^(i,T) = ρ·R^(i,T−1) + (1 − ρ)·dw^(i,T) ⊙ dw^(i,T).  (15)
where dw^(i,T) is the weight gradient and R^(i,T−1) is the learning-rate decay variable at the (T−1)-th iterative weight update.
The weight gradient dw^(i,T) is as follows:

dw^(i,T) = (1/m) Σ_{k=r}^{r+m−1} ∂loss_new^(k) / ∂w^(i).  (16)

where r is the index of the first sample in the batch, m is the batch size, and k is the sample index.
3.4) update bias b with back propagation, i.e.:
b^(i,T+1) = b^(i,T) − Δb^(i,T).  (17)
where Δb^(i,T) is the change of the bias vector from the i-th layer to the (i+1)-th layer at the T-th parameter update, b^(i,T) is the bias from the i-th layer to the (i+1)-th layer at the T-th parameter update, and b^(i,T+1) is that bias at the (T+1)-th parameter update.
The change Δb^(i,T) of the bias vector at the T-th parameter update is as follows:

Δb^(i,T) = η·db^(i,T) / (√R′^(i,T) + ε).  (18)

where R′^(i,T) is the learning-rate decay variable at the T-th iterative bias update.
R′^(i,T) = ρ·R′^(i,T−1) + (1 − ρ)·db^(i,T) ⊙ db^(i,T).  (19)
where db^(i,T) is the bias gradient and R′^(i,T−1) is the learning-rate decay variable at the (T−1)-th iterative bias update.
The bias gradient db^(i,T) is as follows:

db^(i,T) = (1/m) Σ_{k=r}^{r+m−1} ∂loss_new^(k) / ∂b^(i).  (20)
3.5) Determine the difference equation d(L) between the original loss function loss and the updated loss function loss_new:

d(L) = d₁ + d₂ + d₃.  (21)

where d₁, d₂ and d₃ are difference terms.
d(i) = d(i+1)·w^(i,T−1) ⊙ max(0, y_i).  (22)
where y_i is the output data of the i-th layer of the deep neural network and w^(i,T−1) is the weight from the i-th layer to the (i+1)-th layer at the (T−1)-th parameter update.
The weight gradient dw^(i,T) is as follows:

dw^(i,T) = d(i)ᵀ·y_i / m.  (23)
[Equations (24)–(26) are rendered as images in the original. They define the difference terms: d₁ is the gradient of the loss term with respect to the deep neural network output Ŷ, while d₂ and d₃ are the gradients of the active-power and reactive-power penalty terms, respectively. Here Ŷ is the output of the deep neural network and Y is the denormalized value of Ŷ, i.e. the output vector of the probabilistic power flow; P is the power system active power and P̂ is the deep neural network's estimate of it; Q is the power system reactive power and Q̂ is the deep neural network's estimate of it.]
The contribution weights of the difference equations d₂ and d₃ to the output feature vector of the deep neural network are as follows:

[Equation (27), rendered as an image in the original.]

where d_θ[L], d_{1,θ}, d_{2,θ} and d_{3,θ} denote the voltage phase-angle output components of d[L], d₁, d₂ and d₃, and d_v[L], d_{1,v}, d_{2,v} and d_{3,v} denote the corresponding voltage-magnitude output components; d[L] is the total contribution weight of d₂ and d₃, d_θ[L] is their contribution weight to the voltage phase-angle output vector, and d_v[L] is their contribution weight to the voltage-magnitude output vector of the deep neural network.
The empirical values α and β are given by:

[Equation (28), rendered as an image in the original.]

where max is a function returning the maximum value and abs is a function returning the absolute value.
4) And establishing a probabilistic power flow deep learning calculation model.
The method for establishing the probabilistic power flow deep learning calculation model mainly comprises the following steps:
4.1) establishing a power system power flow probability equation as shown in the formulas 29 to 34 respectively.
∂P_ij/∂θ_ij = G_ij·V_i·V_j·sinθ_ij − B_ij·V_i·V_j·cosθ_ij.  (29)
∂Q_ij/∂θ_ij = −B_ij·V_i·V_j·sinθ_ij − G_ij·V_i·V_j·cosθ_ij.  (30)
∂P_ij/∂V_i = G_ij·(2V_i − V_j·cosθ_ij) − B_ij·V_j·sinθ_ij.  (31)
∂P_ij/∂V_j = −G_ij·V_i·cosθ_ij − B_ij·V_i·sinθ_ij.  (32)
∂Q_ij/∂V_i = −B_ij·(2V_i − V_j·cosθ_ij) − G_ij·V_j·sinθ_ij.  (33)
∂Q_ij/∂V_j = B_ij·V_i·cosθ_ij − G_ij·V_i·sinθ_ij.  (34)

where V_i is the voltage magnitude of bus i, θ_ij is the voltage phase-angle difference between bus i and bus j, and G_ij and B_ij are the conductance and susceptance between the i-th and the j-th bus, respectively. (Equations (29)–(34) are images in the original; they are reconstructed here as the partial derivatives of the branch flows (8) and (9) with respect to the phase angle and the voltage magnitudes.)
4.2) removing voltage amplitude data in the flow input data of the deep neural network.
4.3) removing the phase angle data in the reactive power input data of the deep neural network.
4.4) based on the steps 4.1 to 4.3, establishing a probabilistic power flow deep learning calculation model, which mainly comprises the following steps:
4.4.1) Determine the weights of the probabilistic power flow deep learning calculation model:

[The weight expression (35) is rendered as an image in the original.]

where d_{1,θ} and the difference equation d(L) are given, respectively, by

[Equation (36), rendered as an image in the original,] in which θ̂ is the neural network's estimate of the coding parameter θ and θ is the neural network coding parameter, and

d(L) = d₁.  (37)
4.4.2) Establish the probabilistic power flow deep learning calculation model:

[Equation (38), rendered as an image in the original.]
5) Calculate the probabilistic power flow of the power system under study using the probabilistic power flow deep learning calculation model.
The technical effect of the present invention is clear. The invention effectively combines a physical model with data-driven deep learning: it provides a model-based deep learning simplification method derived from the physical characteristics of the power transmission network, so that lightweight computation can exploit the advantages of model-driven analysis. By combining data-driven techniques with the physical mechanisms of the power domain, the method addresses the difficulty of balancing huge computational cost against computational accuracy when solving the probabilistic power flow.
Drawings
FIG. 1 is a process flow diagram.
Detailed Description
The present invention is further illustrated by the following example, but the scope of the claimed subject matter is not limited to it. Various substitutions and alterations made according to common technical knowledge and conventional means in the field, without departing from the technical idea of the invention, are covered by the scope of the present invention.
Example 1:
referring to fig. 1, the probabilistic power flow deep learning calculation method based on the physical model mainly includes the following steps:
1) and acquiring data of the power system and establishing a physical model of the power system. The power system data mainly includes wind speed, photovoltaic power and load.
2) And establishing a loss function of the probabilistic power flow analysis deep neural network, and updating a parameter theta of the deep neural network.
The method mainly comprises the following steps of establishing a loss function of the probabilistic power flow analysis deep neural network:
2.1) Determine the objective function loss:

loss = (1/m) Σ_{k=1}^{m} ‖ Y_out^(k) − f_L( f_{L−1}( … f_1( X_in^(k) ) ) ) ‖².  (1)

where m is the number of training samples per training batch, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) denotes the first-layer encoding function, f_L(·) denotes the L-th-layer encoding function, and loss denotes the squared loss function.
When i = 1, 2, 3, …, L−1, the i-th-layer encoding function f_i(·) is as follows:

f_i(X) = R_i(w_i·X + b_i).  (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input of the encoding function. When i = 1, X = X_in; when i = 2, X = f_1(X_in); and so on.
When i = L, the L-th-layer encoding function f_L(·) is as follows:

f_L(X) = R_L(w_L·X + b_L).  (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer ReLU activation function R_i is as follows:

R_i(x) = max(0, x).  (4)
in the formula, x is the input of the neuron, i.e., the input data of the power system.
When i = L, the L-th-layer activation function is the identity:

R_i(x) = R_L(x) = x.  (5)
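The encoder of equations (2)–(5), ReLU hidden layers followed by a linear output layer, can be sketched as follows (a minimal NumPy illustration; the function and variable names are ours, not the patent's):

```python
import numpy as np

def relu(x):
    # Hidden-layer activation of Eq. (4): R_i(x) = max(0, x)
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """L-layer encoding of Eqs. (2)-(3): ReLU hidden layers, identity output (Eq. (5))."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = relu(w @ a + b)                  # f_i(X) = R_i(w_i X + b_i)
    return weights[-1] @ a + biases[-1]      # f_L(X) = w_L X + b_L
```

Stacking the layers this way reproduces the nested composition f_L(f_{L−1}(…f_1(X_in))) that appears in the objective function (1).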
2.2) To improve the training efficiency of the DNN, the input and output data of the PPF should be preprocessed to eliminate the adverse effects of singular samples and numerical problems on the training process. Outliers are handled efficiently by normalizing the samples with the z-score method, which requires only the mean and standard deviation of the historical statistics. Moreover, it preserves the distribution characteristics better than other preprocessing methods (such as the min-max method).
Preprocess the input and output data of the power system probabilistic power flow:

v_out = (v − v_mean) / v_std.  (6)

where v_out is the preprocessed input or output data vector of the power system probabilistic power flow, v is the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
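Equation (6) is the standard z-score transform; a small sketch (names are ours):

```python
import numpy as np

def zscore(v, v_mean=None, v_std=None):
    """Eq. (6): v_out = (v - v_mean) / v_std, using historical statistics if given."""
    v = np.asarray(v, dtype=float)
    v_mean = v.mean(axis=0) if v_mean is None else v_mean
    v_std = v.std(axis=0) if v_std is None else v_std
    return (v - v_mean) / v_std
```

Applied column-wise to the wind, photovoltaic and load samples, this removes scale differences while preserving the shape of each distribution.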
2.3) This example uses the RMSProp method as the learning algorithm. It divides the training samples into several batches, and each batch is trained in turn to update the parameters. The RMSProp method adaptively updates the learning rate of each parameter by keeping a moving average of the squared gradient, which reduces the training burden and helps avoid poor local minima. The deep neural network parameters are updated by the RMSProp algorithm.
Update the deep neural network parameter θ based on the objective function loss:

r_t = ρ·r_{t−1} + (1 − ρ)·g_t ⊙ g_t,
θ_t = θ_{t−1} − η·g_t / (√r_t + ε).  (7)

where g_t is the partial derivative of the objective function loss with respect to θ at the t-th update, ⊙ is the Hadamard (element-wise) product, ρ is the decay constant, η is the learning rate of the neural network, and ε is a small positive constant; here ρ = 0.99, η = 0.001 and ε = 1 × 10⁻⁸. θ_t and θ_{t−1} are the parameters after the t-th and (t−1)-th updates, and r_t and r_{t−1} are the accumulation variables after the t-th and (t−1)-th updates, respectively.
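One RMSProp parameter update per equation (7), with the constants given above (a sketch under our own naming):

```python
import numpy as np

def rmsprop_step(theta, grad, r, rho=0.99, eta=0.001, eps=1e-8):
    """Eq. (7): accumulate the squared gradient, then scale the step per parameter."""
    r = rho * r + (1.0 - rho) * grad * grad          # moving average of grad^2
    theta = theta - eta * grad / (np.sqrt(r) + eps)  # per-parameter adaptive step
    return theta, r
```

Because each parameter is divided by the root of its own gradient history, dimensions with consistently large gradients take smaller effective steps, which is the adaptive behavior the text describes.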
3) Perform deep learning of the power system probabilistic power flow with the deep neural network. The goal of deep learning is to obtain the optimal parameters of the DNN (deep neural network).
Deep learning of the power system probabilistic power flow with the deep neural network mainly comprises the following steps:
3.1) determining the probability power flow of the power system, namely establishing a reactive power equation and an active power equation of the probability power flow of the power system.
The branch active power P_ij from the i-th bus to the j-th bus of the power system is as follows:

P_ij = G_ij·(V_i² − V_i·V_j·cosθ_ij) − B_ij·V_i·V_j·sinθ_ij.  (8)

where V_i is the voltage magnitude of bus i, V_j is the voltage magnitude of bus j, θ_ij is the voltage phase-angle difference between bus i and bus j, and G_ij and B_ij are the conductance and susceptance between the i-th and the j-th bus, respectively.
The branch reactive power Q_ij from the i-th bus to the j-th bus of the power system is as follows:

Q_ij = −B_ij·(V_i² − V_i·V_j·cosθ_ij) − G_ij·V_i·V_j·sinθ_ij.  (9)
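Equations (8) and (9) can be evaluated directly; a NumPy sketch (the function name is ours):

```python
import numpy as np

def branch_power(Vi, Vj, theta_ij, Gij, Bij):
    """Branch active/reactive power of Eqs. (8)-(9)."""
    common = Vi**2 - Vi * Vj * np.cos(theta_ij)
    Pij = Gij * common - Bij * Vi * Vj * np.sin(theta_ij)   # Eq. (8)
    Qij = -Bij * common - Gij * Vi * Vj * np.sin(theta_ij)  # Eq. (9)
    return Pij, Qij
```

With array inputs the same function evaluates all branches of a sampled scenario at once, which is how the penalty terms below would be computed in practice.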
3.2) Take the node voltages of the power system as the output vector and, combining the physical mechanisms and domain knowledge of electrical engineering, add the branch power-flow equations to the training target as penalty terms, so that the training target loss_new becomes:

loss_new = loss + f_P + f_Q.  (10)

where f_P(·) is the active-power equation penalty of the power system branches, f_Q(·) is the reactive-power equation penalty, and loss_new is the updated loss function.
The active-power penalty f_P of the power system branches is as follows:

f_P = (1/m) Σ ‖ P̂_out − P_out ‖².  (11)

where P̂_out is the normalized active power estimated by the network, P_out is the normalized branch active power of the standardized power system, and ‖·‖ denotes a norm.
Similarly, the reactive-power penalty f_Q of the power system branches is as follows:

f_Q = (1/m) Σ ‖ Q̂_out − Q_out ‖².  (12)

where Q̂_out is the normalized reactive power estimated by the network and Q_out is the normalized branch reactive power of the standardized power system.
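The penalized training target of equations (10)–(12) can be sketched as follows. The exact norm inside f_P and f_Q is an image in the original, so a mean-squared form consistent with equation (1) is assumed here; all names are ours:

```python
import numpy as np

def physics_penalized_loss(Y_hat, Y_out, P_hat, P_out, Q_hat, Q_out):
    """loss_new = loss + f_P + f_Q (Eq. (10)); each term an assumed mean-squared error."""
    m = Y_out.shape[0]
    loss = np.sum((Y_hat - Y_out) ** 2) / m   # data term, Eq. (1)
    f_P = np.sum((P_hat - P_out) ** 2) / m    # active-power penalty, Eq. (11)
    f_Q = np.sum((Q_hat - Q_out) ** 2) / m    # reactive-power penalty, Eq. (12)
    return loss + f_P + f_Q
```

Here P_hat and Q_hat would be recomputed from the network's voltage outputs through equations (8) and (9), which is what couples the data-driven loss to the physical model.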
3.3) update the weight w with back propagation, i.e.:
w^(i,T+1) = w^(i,T) − Δw^(i,T).  (13)
where Δw^(i,T) is the change of the weight matrix from the i-th layer to the (i+1)-th layer at the T-th parameter update, w^(i,T) is the weight from the i-th layer to the (i+1)-th layer at the T-th parameter update, and w^(i,T+1) is that weight at the (T+1)-th parameter update.
The change Δw^(i,T) of the weight matrix from the i-th layer to the (i+1)-th layer at the T-th parameter update is as follows:

Δw^(i,T) = η·dw^(i,T) / (√R^(i,T) + ε).  (14)

where R^(i,T) is the learning-rate decay variable at the T-th iterative weight update.
R^(i,T) = ρ·R^(i,T−1) + (1 − ρ)·dw^(i,T) ⊙ dw^(i,T).  (15)
where dw^(i,T) is the weight gradient and R^(i,T−1) is the learning-rate decay variable at the (T−1)-th iterative weight update.
The weight gradient dw^(i,T) is as follows:

dw^(i,T) = (1/m) Σ_{k=r}^{r+m−1} ∂loss_new^(k) / ∂w^(i).  (16)

where r is the index of the first sample in the batch, m is the batch size, and k is the sample index.
3.4) update bias b with back propagation, i.e.:
b^(i,T+1) = b^(i,T) − Δb^(i,T).  (17)
where Δb^(i,T) is the change of the bias vector from the i-th layer to the (i+1)-th layer at the T-th parameter update, b^(i,T) is the bias from the i-th layer to the (i+1)-th layer at the T-th parameter update, and b^(i,T+1) is that bias at the (T+1)-th parameter update.
The change Δb^(i,T) of the bias vector at the T-th parameter update is as follows:

Δb^(i,T) = η·db^(i,T) / (√R′^(i,T) + ε).  (18)

where R′^(i,T) is the learning-rate decay variable at the T-th iterative bias update.
R′^(i,T) = ρ·R′^(i,T−1) + (1 − ρ)·db^(i,T) ⊙ db^(i,T).  (19)
where db^(i,T) is the bias gradient and R′^(i,T−1) is the learning-rate decay variable at the (T−1)-th iterative bias update.
The bias gradient db^(i,T) is as follows:

db^(i,T) = (1/m) Σ_{k=r}^{r+m−1} ∂loss_new^(k) / ∂b^(i).  (20)
3.5) Determine the difference equation d(L) between the original loss function loss and the updated loss function loss_new:

d(L) = d₁ + d₂ + d₃.  (21)

where d₁, d₂ and d₃ are difference terms.
d(i) = d(i+1)·w^(i,T−1) ⊙ max(0, y_i).  (22)
where d(i+1) is the difference equation of the (i+1)-th layer, max(·) takes the maximum value, y_i is the output data of the i-th layer of the deep neural network, and w^(i,T−1) is the weight from the i-th layer to the (i+1)-th layer at the (T−1)-th parameter update.
The weight gradient dw^(i,T) is as follows:

dw^(i,T) = d(i)ᵀ·y_i / m.  (23)

where the superscript ᵀ denotes the transpose.
[Equations (24)–(26) are rendered as images in the original. They define the difference terms: d₁ is the gradient of the loss term with respect to the deep neural network output Ŷ, while d₂ and d₃ are the gradients of the active-power and reactive-power penalty terms, respectively. Here Ŷ is the output of the deep neural network and Y is the denormalized value of Ŷ, i.e. the output vector of the probabilistic power flow; P is the power system active power and P̂ is the deep neural network's estimate of it; Q is the power system reactive power and Q̂ is the deep neural network's estimate of it.]
As can be seen from equations (13)–(26), the proposed objective function increases the update step of the parameter w when the update direction reduces both the voltage and the power calculation errors, which accelerates training convergence. In addition, when the parameter update directions of (24), (25) and (26) differ, the branch power constraints added to the loss function are expected to reduce or prevent overfitting of the deep neural network to the node voltages.
As can be seen from equations (21)–(26), the parameter update direction can be dominated by (25) and (26), since the errors of the power estimates P̂ and Q̂ can be very large. The remedy for this is commonly referred to as normalization in the deep learning field. From the deep-learning perspective, the update direction of the core target of this embodiment (high-precision voltage) should dominate. The output of the deep neural network contains the voltage magnitudes and phase angles (i.e., Y_out = [V, θ]). The contribution of d₂ + d₃ to the output feature vector should not exceed that of d₁, which would disturb or even diverge the DNN training; it should be slightly less than the contribution of d₁. The invention therefore proposes two contribution weights of d₂ and d₃ to the output feature vector.
The contribution weights of the difference equations d₂ and d₃ to the output feature vector of the deep neural network are as follows:

[Equation (27), rendered as an image in the original.]

where d_θ[L], d_{1,θ}, d_{2,θ} and d_{3,θ} denote the voltage phase-angle output components of d[L], d₁, d₂ and d₃, and d_v[L], d_{1,v}, d_{2,v} and d_{3,v} denote the corresponding voltage-magnitude output components; d[L] is the total contribution weight of d₂ and d₃, d_θ[L] is their contribution weight to the voltage phase-angle output vector, and d_v[L] is their contribution weight to the voltage-magnitude output vector of the deep neural network.
The empirical values α and β are given by:

[Equation (28), rendered as an image in the original.]

where max is a function returning the maximum value and abs is a function returning the absolute value.
4) And establishing a probabilistic power flow deep learning calculation model.
The method for establishing the probabilistic power flow deep learning calculation model mainly comprises the following steps:
4.1) establishing a power system power flow probability equation as shown in the formulas 29 to 34 respectively.
∂P_ij/∂θ_ij = G_ij·V_i·V_j·sinθ_ij − B_ij·V_i·V_j·cosθ_ij.  (29)
∂Q_ij/∂θ_ij = −B_ij·V_i·V_j·sinθ_ij − G_ij·V_i·V_j·cosθ_ij.  (30)
∂P_ij/∂V_i = G_ij·(2V_i − V_j·cosθ_ij) − B_ij·V_j·sinθ_ij.  (31)
∂P_ij/∂V_j = −G_ij·V_i·cosθ_ij − B_ij·V_i·sinθ_ij.  (32)
∂Q_ij/∂V_i = −B_ij·(2V_i − V_j·cosθ_ij) − G_ij·V_j·sinθ_ij.  (33)
∂Q_ij/∂V_j = B_ij·V_i·cosθ_ij − G_ij·V_i·sinθ_ij.  (34)

where V_i is the voltage magnitude of bus i, θ_ij is the voltage phase-angle difference between bus i and bus j, and G_ij and B_ij are the conductance and susceptance between the i-th and the j-th bus, respectively. (Equations (29)–(34) are images in the original; they are reconstructed here as the partial derivatives of the branch flows (8) and (9) with respect to the phase angle and the voltage magnitudes.)
4.2) removing voltage amplitude data in the flow input data of the deep neural network.
Deep neural networks mine the nonlinear characteristics and relationships of the probabilistic power flow by quantifying the effects of input changes. In power systems, the voltage magnitude typically fluctuates within ±5% p.u., whereas under varying operating conditions the nodal phase angle can change by 30° or more. The voltage magnitude is therefore easier to learn than the phase angle, which needs more guidance from the physical model. Furthermore, since the standard deviation of the voltage magnitude is much smaller than that of the phase angle, it can be seen from (25) and (26) that the model guidance influences the voltage magnitude much less than the phase angle.
In terms of computational complexity, the guidance of the voltage magnitude is expensive compared with that of the phase angle: from equations (29)–(34), if the voltage-magnitude guidance is not removed, all of equations (23)–(27) must be evaluated, and the computational cost of the voltage-magnitude guidance is about twice that of the voltage phase angle.
Therefore, based on this numerical analysis and complexity comparison, the guidance of the voltage magnitude is removed in the general model-based deep learning method.
4.3) Remove the reactive-power guidance on the voltage phase angle in the deep neural network learning process.
In a power transmission network, conductance and susceptance have the following relationship:
Figure BDA0002048814550000141
although the node phase angle may vary drastically with operating conditions, the phase angle difference of two nodes will typically not vary much. Thus, we can have:
sinθij<cosθij; (36)
From equations (35) and (36), it can easily be deduced that the absolute value of equation (30) is much smaller than that of equation (29). In addition,
Figure BDA0002048814550000146
is generally less than
Figure BDA0002048814550000147
in absolute value. The absolute value of std(Q) is usually smaller than that of std(P), since in practice the active load demand is higher than the reactive load demand. Therefore, it can be concluded from (23)-(27) that the reactive branch power has a much smaller impact on the training process than the active power flow. Hence, this embodiment removes the reactive-power guidance of the phase angle. std(*) denotes the standard deviation.
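The smallness of the sinθij terms can be checked numerically. The sketch below (illustrative angle values, not from the patent) verifies inequality (36) for typical phase-angle differences between adjacent buses:

```python
import math

# Illustrative check of inequality (36): for typical transmission-line
# phase-angle differences, sin(theta_ij) < cos(theta_ij).
# The angle values below are assumptions for illustration only.
for deg in [1, 5, 10, 20, 30]:
    th = math.radians(deg)
    assert math.sin(th) < math.cos(th)  # the inequality holds up to 45 degrees
```

The inequality fails only when the phase-angle difference exceeds 45°, far beyond the typical operating range cited above.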
4.4) based on the steps 4.1 to 4.3, establishing a probabilistic power flow deep learning calculation model, which mainly comprises the following steps:
4.4.1) determining the weights of the probabilistic power flow deep learning calculation model, namely:
Figure BDA0002048814550000142
where d1,θ and the difference equation d(L) are respectively as follows:
Figure BDA0002048814550000143
In the formula,
Figure BDA0002048814550000144
is an estimate of the parameter theta by the neural network.
d(L)=d1。 (37)
4.4.2) establishing a probabilistic power flow deep learning calculation model, namely:
Figure BDA0002048814550000145
5) and calculating the probability load flow of the power system to be measured by utilizing the probability load flow deep learning calculation model.
Example 2:
the probability power flow deep learning calculation method based on the physical model mainly comprises the following steps:
1) Sample the system states: randomly draw the random input variables of the system, including wind speed, photovoltaic power, and load.
2) Input the system operating conditions (the random input variables of step 1) into the deep neural network, which directly maps them to the voltage amplitudes and phase angles of all unsolved samples; the power information is then solved directly from the amplitudes and phase angles. No iterative computation is needed in the whole process, so the calculation speed of the probabilistic power flow is significantly increased.
3) Based on the results of step 2, calculate and analyze the probabilistic power flow indices, including the mean value, standard deviation, and probability density function of all output variables.
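The indices of step 3 can be computed from the mapped samples roughly as follows. This is a minimal sketch: the Gaussian samples below are made-up stand-ins for the DNN outputs of one output variable (e.g. a bus voltage magnitude).

```python
import random
import statistics

# Hypothetical DNN outputs for one output variable over 10000 sampled
# system states; the distribution parameters are assumptions.
random.seed(0)
samples = [random.gauss(1.0, 0.02) for _ in range(10000)]

mean = statistics.fmean(samples)   # mean-value index
std = statistics.stdev(samples)    # standard-deviation index

# crude empirical probability density function over 50 bins
lo, hi = min(samples), max(samples)
width = (hi - lo) / 50
counts = [0] * 50
for s in samples:
    k = min(int((s - lo) / width), 49)
    counts[k] += 1
pdf = [c / (len(samples) * width) for c in counts]
```

The histogram-based density integrates to one by construction; a kernel density estimate could be substituted for a smoother probability density function.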
Example 3:
a probabilistic load flow deep learning calculation method based on a physical model, mainly as in embodiment 2, wherein the main steps of establishing a loss function of a probabilistic load flow analysis deep neural network are as follows:
1) determining the objective function loss, namely:
Figure BDA0002048814550000151
where m is the number of training samples per training round, L is the number of layers, Yout is the output feature vector of the power system probabilistic power flow, and Xin is the input feature vector of the power system probabilistic power flow.
Figure BDA0002048814550000152
Representing the first layer encoding function.
Figure BDA0002048814550000153
represents the Lth-layer encoding function, and loss represents the squared loss function.
Wherein, when i is 1, 2, 3 …, L-1, the i-th layer coding function
Figure BDA0002048814550000154
As follows:
Figure BDA0002048814550000155
In the formula, Ri is the activation function of the layer-i neurons. The weight matrix wi is an n(i+1)×ni matrix. The bias vector bi is an n(i+1)-dimensional vector. ni is the number of neurons in the ith layer. X is the input of the encoding function.
When i is L, the i-th layer coding function
Figure BDA0002048814550000156
As follows:
Figure BDA0002048814550000157
In the formula, RL is the activation function of the layer-L neurons. The weight matrix wL is an n(L+1)×nL matrix. The bias vector bL is an n(L+1)-dimensional vector. nL is the number of neurons in the Lth layer.
When i = 1, 2, 3, …, L-1, the ith-layer ReLU activation function Ri is as follows:
Figure BDA0002048814550000158
in the formula, x is the input of the neuron, i.e., the input data of the power system.
When i = L, the ith-layer activation function Ri is as follows:
Ri(x)=RL(x)=x。 (5)
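The stacked encoding functions of equations (2)-(5) can be sketched as follows, using plain Python lists for vectors, ReLU hidden layers, and a linear output layer. The layer sizes and weights are illustrative, not from the patent.

```python
def relu(v):
    # eq. (4): element-wise max(0, x) for the hidden layers
    return [x if x > 0 else 0.0 for x in v]

def layer(x, w, b, activation):
    # one encoding function f(x) = activation(w x + b); w is a list of rows
    z = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]
    return activation(z)

def forward(x, weights, biases):
    # hidden layers use ReLU (eqs. 2 and 4); the output layer is linear (eqs. 3 and 5)
    for w, b in zip(weights[:-1], biases[:-1]):
        x = layer(x, w, b, relu)
    return layer(x, weights[-1], biases[-1], lambda z: z)
```

For instance, with one hidden neuron, forward([1.0], [[[2.0]], [[-3.0]]], [[-1.0], [0.5]]) applies relu(2·1 − 1) = 1 and then the linear output −3·1 + 0.5 = −2.5.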
2) To improve the training efficiency of the DNN, the input and output data of the PPF should be preprocessed to eliminate the adverse effects of singular samples and numerical problems on the training process. Singular samples can be handled efficiently by normalizing the samples with the z-score method, which requires only the mean and standard deviation of the historical statistics. Moreover, the z-score method preserves the distribution characteristics more effectively than other preprocessing methods (such as the min-max method).
Preprocessing input data and output data of the power system probability power flow, namely:
Figure BDA0002048814550000161
In the formula, vout represents the preprocessed input or output data vector of the power system probabilistic power flow. v represents the raw input or output data vector of the power system probabilistic power flow. vmean and vstd are the mean and standard deviation of the vector v, respectively.
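Equation (6) amounts to the following minimal sketch; in practice the historical mean and standard deviation would be supplied rather than recomputed from the vector itself.

```python
import statistics

def zscore(v, v_mean=None, v_std=None):
    # z-score normalization of eq. (6); v_mean/v_std may come from
    # historical statistics instead of the current data
    if v_mean is None:
        v_mean = statistics.fmean(v)
    if v_std is None:
        v_std = statistics.stdev(v)
    return [(x - v_mean) / v_std for x in v]
```

For example, zscore([1.0, 2.0, 3.0]) returns [-1.0, 0.0, 1.0], i.e. zero mean and unit standard deviation.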
3) This embodiment adopts the RMSProp method as the learning algorithm. It divides the training samples into several batches, and each batch is used in turn to update the parameters. The RMSProp method adaptively updates the learning rate of each parameter by keeping a moving average of the squared gradient, which reduces the training burden and helps avoid poor local minima. The deep neural network parameters are updated by the RMSProp algorithm.
Updating the deep neural network parameter theta based on the target function loss, namely:
Figure BDA0002048814550000162
In the formula,
Figure BDA0002048814550000163
is the partial derivative of the objective function loss with respect to the variable θ at the tth update. ⊙ is the Hadamard (element-wise) product; r is the attenuation variable; ρ is a constant; η is the learning rate of the neural network. Here ρ = 0.99, η = 0.001 and ∈ = 1 × 10⁻⁸.
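The per-parameter RMSProp update of equations (7) and (14)-(15) can be sketched as follows, over a flat parameter list, with ρ, η and ∈ taking the values stated above:

```python
def rmsprop_step(theta, grad, r, rho=0.99, eta=0.001, eps=1e-8):
    # eq. (15): moving average of the squared gradient, per parameter
    r = [rho * ri + (1 - rho) * g * g for ri, g in zip(r, grad)]
    # eqs. (7)/(14): scale each parameter's step by the root of that average
    theta = [t - eta * g / ((ri ** 0.5) + eps)
             for t, g, ri in zip(theta, grad, r)]
    return theta, r
```

Starting from r = 0, a unit gradient gives r = 0.01 and a step of η/0.1 ≈ 0.01, illustrating how the adaptive scaling enlarges early steps relative to plain gradient descent.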
Example 4:
a deep learning calculation method of probability power flow based on a physical model, mainly as shown in embodiment 2, wherein the deep learning of the probability power flow of a power system by using a deep neural network mainly comprises the following steps:
1) and determining the probability power flow of the power system, namely establishing a reactive power equation and an active power equation of the probability power flow of the power system.
The branch active power Pij from the ith bus to the jth bus in the power system is as follows:
Pij=Gij(Vi²-ViVjcosθij)-BijViVjsinθij. (8)
In the formula, Vi is the voltage amplitude of bus i. θij is the voltage phase angle difference between bus i and bus j. Gij and Bij are respectively the conductance and susceptance between the ith bus and the jth bus. Vj is the voltage amplitude of bus j.
The branch reactive power Qij from the ith bus to the jth bus in the power system is as follows:
Qij=-Bij(Vi²-ViVjcosθij)-GijViVjsinθij. (9)
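Equations (8)-(9) can be evaluated directly from predicted amplitudes and phase angles. The following is a direct transcription; variable names are illustrative.

```python
import math

def branch_powers(Vi, Vj, theta_ij, Gij, Bij):
    # active power flow of eq. (8) and reactive power flow of eq. (9)
    common = Vi**2 - Vi * Vj * math.cos(theta_ij)
    Pij = Gij * common - Bij * Vi * Vj * math.sin(theta_ij)
    Qij = -Bij * common - Gij * Vi * Vj * math.sin(theta_ij)
    return Pij, Qij
```

With equal voltages and zero angle difference, both flows are zero, as expected for an unloaded branch.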
2) Determine the training target lossnew, taking the node voltages of the power system as the output vector, namely:
Figure BDA0002048814550000171
In the formula,
Figure BDA0002048814550000172
and (4) an active power equation of a branch of the power system.
Figure BDA0002048814550000173
is the reactive power equation of a branch of the power system. lossnew is the updated loss function.
Active power equation of branch of electric power system
Figure BDA0002048814550000174
As follows:
Figure BDA0002048814550000175
In the formula,
Figure BDA0002048814550000176
is the estimate of the normalized branch active power by the deep neural network. Pout is the normalized branch active power of the power system.
Figure BDA0002048814550000177
In the formula,
Figure BDA0002048814550000178
is the estimate of the normalized branch reactive power by the deep neural network. Qout is the normalized branch reactive power of the power system.
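A minimal sketch of the combined training target (10)-(12) follows, assuming squared-error components; the relative weighting of the branch-power terms by α and β is an assumption here for illustration (the patent derives those weights in equation (28)).

```python
def mse(a, b):
    # mean squared error between two equal-length vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def loss_new(v_hat, v_true, p_hat, p_out, q_hat, q_out, alpha, beta):
    # eq. (10): the node-voltage loss is augmented with branch-power
    # terms (11)-(12), injecting the physical power-flow model into training
    return (mse(v_hat, v_true)
            + alpha * mse(p_hat, p_out)
            + beta * mse(q_hat, q_out))
```

Even when the voltage error is zero, a mismatch in the implied branch powers keeps the loss positive, which is how the physical model guides training.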
3) Update the weights w with back propagation, i.e.:
w(i,T+1)=w(i,T)-Δw(i,T)。 (13)
In the formula, Δw(i,T) is the change of the weight matrix from the ith layer to the (i+1)th layer at the Tth parameter update. w(i,T) is the weight from the ith layer to the (i+1)th layer at the Tth parameter update. w(i,T+1) is the weight from the ith layer to the (i+1)th layer at the (T+1)th parameter update.
The change Δw(i,T) of the weight matrix from the ith layer to the (i+1)th layer at the Tth parameter update is as follows:
Figure BDA0002048814550000179
In the formula, R(i,T) is the learning-rate decay variable at the Tth iterative weight update.
R(i,T)=ρ*R(i,T-1)+(1-ρ)*dw(i,T)⊙dw(i,T)。 (15)
In the formula, dw(i,T) is the weight change amount.
The weight change amount dw(i,T) is as follows:
Figure BDA00020488145500001710
where r is the serial number of the initial sample in the batch, and m is the sample size of the batch.
4) Update bias b with back propagation, i.e.:
b(i,T+1)=b(i,T)-Δb(i,T)。 (17)
In the formula, Δb(i,T) is the change of the bias matrix from the ith layer to the (i+1)th layer at the Tth parameter update. b(i,T) is the bias from the ith layer to the (i+1)th layer at the Tth parameter update. b(i,T+1) is the bias from the ith layer to the (i+1)th layer at the (T+1)th parameter update.
The change Δb(i,T) of the bias matrix from the ith layer to the (i+1)th layer at the Tth parameter update is as follows:
Figure BDA0002048814550000181
In the formula, R'(i,T) is the learning-rate decay variable at the Tth iterative bias update, given as follows.
R′(i,T)=ρ*R′(i,T-1)+(1-ρ)*db(i,T)⊙db(i,T)。 (19)
In the formula, db(i,T) is the bias change amount.
The bias change amount db(i,T) is as follows:
Figure BDA0002048814550000182
5) Determine the difference equation d(L) of the original loss function loss and the updated loss function lossnew, as follows:
d(L)=d1+d2+d3。 (21)
In the formula, d1, d2 and d3 are difference equations. The ith-layer difference equation d(i) is as follows:
d(i)=d(i+1)w(i,T-1)⊙max(0,yi). (22)
In the formula, yi is the output data of the deep neural network; w(i,T-1) is the weight from the ith layer to the (i+1)th layer at the (T-1)th parameter update.
Based on equations (21) and (22), the weight change amount dw(i,T) satisfies:
dw(i,T)=d(i)Tyi/m。 (23)
Figure BDA0002048814550000183
In the formula,
Figure BDA0002048814550000184
is the output of the deep neural network. Y is
Figure BDA0002048814550000185
the denormalized value of that output.
Figure BDA0002048814550000191
Figure BDA0002048814550000192
The contribution weights of d2 and d3 to the output feature vector of the deep neural network are respectively as follows:
Figure BDA0002048814550000193
In the formula, d1,θ, d2,θ and d3,θ denote the voltage-phase-angle output vectors in d1, d2 and d3; d1,v, d2,v and d3,v denote the voltage-amplitude output vectors in d1, d2 and d3; dθ[L] and dv[L] are the contribution weights to the voltage-phase-angle and voltage-amplitude output vectors of the deep neural network, respectively.
The empirical value α and the empirical value β are respectively as follows:
Figure BDA0002048814550000194
in the formula, max is a function that returns the maximum value, and abs is a function that returns the absolute value.
Example 5:
a probabilistic power flow deep learning calculation method based on a physical model, mainly as in embodiment 2, wherein the method for establishing the probabilistic power flow deep learning calculation model mainly comprises the following steps:
1) and establishing power flow probability equations of the power system, which are respectively shown in equations 29 to 34.
Figure BDA0002048814550000195
In the formula, Vi is the voltage amplitude of bus i; θij is the voltage phase angle difference between bus i and bus j; Gij and Bij are respectively the conductance and susceptance between the ith bus and the jth bus.
Figure BDA0002048814550000196
Figure BDA0002048814550000197
Figure BDA0002048814550000198
Figure BDA0002048814550000201
Figure BDA0002048814550000202
2) Remove the guidance of the physical model on the voltage amplitude in the deep neural network learning process.
3) Remove the reactive-power guidance on the voltage phase angle in the deep neural network learning process.
4) Based on the steps 1 to 3, a probabilistic load flow deep learning calculation model is established, and the method mainly comprises the following steps:
4.1) determining the weights of the probabilistic power flow deep learning calculation model, namely:
Figure BDA0002048814550000203
where d1,θ and the difference equation d(L) are respectively as follows:
Figure BDA0002048814550000204
d(L)=d1。 (37)
4.2) establishing a probabilistic power flow deep learning calculation model, namely:
Figure BDA0002048814550000205
example 6:
a method for verifying probability load flow deep learning calculation based on a physical model mainly comprises the following steps:
1) In this embodiment, simulations were performed on the IEEE 30-bus and IEEE 118-bus standard systems and a practical 661-bus system. Monte Carlo simulation with the Newton-Raphson algorithm is used as the reference for the probabilistic power flow.
2) The following methods were compared to verify the effectiveness of the method disclosed in example 1.
M1: the DNN with the designed learning method is applied.
M2: the proposed model-based general deep learning method is introduced on the basis of M1.
M3: based on M2, but with the guidance of the voltage amplitude removed.
M4: based on M3, but with the reactive-power guidance of the phase angle removed.
The above methods use the same hyper-parameters in each calculation. The hyper-parameters and the number of training samples of the deep neural network under the different examples are shown in Table 1. The numbers of validation samples and test samples are both 10000. The training process stops when the deep neural network satisfies the early-stopping criterion or the number of iteration rounds reaches a threshold. All simulations were performed on a PC equipped with an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz and 32 GB RAM.
TABLE 1 hyper-parameter settings of deep neural networks under different examples
Examples of the design Hidden layer Number of training samples
Case 30 [100 100 100] 10000
Case 118 [200 200 200] 20000
Case 661 [500 500 500 500 500] 70000
3) Validation of the proposed method:
table 2 comparison of performance of M1 and M2 under the same iteration round
Figure BDA0002048814550000211
Table 3 compares the performance of M1 and M2 when meeting the accuracy requirements
Figure BDA0002048814550000212
Tables 2 and 3 are used to verify the validity of the proposed model-based general deep learning method for probabilistic power flow problems.
When the number of iterations is fixed, it can be seen from table 2 that the proposed method M2 can make all the indicators meet the accuracy requirement (< 5%), while one or two accuracy indicators cannot be met by the method M1.
From the results shown in Table 3, on the premise that the accuracy requirement is met, the proposed method M2 reduces the Nepoch of M1 by 68.7%, 71.7% and 61.3% in Case30, Case118 and Case661, respectively. Furthermore, note that in Case661 the Ppf of method M1 increases with the number of iteration rounds between Tables 2 and 3. This phenomenon can be referred to as overfitting: without the guidance of the physical model, the accuracy of the branch power cannot be guaranteed even though most node voltages are well approximated by M1. Further, the Ppf calculated by M2 in Case661 is 3.4%, only 1.6% better than the 5% requirement; yet it is almost impossible for M1 to achieve that remaining 1.6% improvement in the DNN.
In conclusion, the proposed method realizes a close combination of the physical model and the data-driven deep learning technique: it significantly accelerates convergence and can reduce or prevent overfitting of the deep neural network to the node voltages.
TABLE 4 comparison of Performance between M1 and M3 when accuracy requirements are met
Figure BDA0002048814550000221
Table 4 shows the performance comparison between M1 and M3. It can be observed that method M3 still has a clear advantage over M1. Comparing Table 4 with Table 3, the number of iteration rounds required by M3 to meet the accuracy requirement is smaller than that of M2 in Case118 and Case661. In Case30, the number of iteration rounds rises from 507 to 576, which is entirely tolerable at small scale. Therefore, removing the guidance of the voltage amplitude is effective.
TABLE 5 comparison of Performance between M1 and M4 when accuracy requirements are met
Figure BDA0002048814550000222
The reactive-power guidance of the phase angle is further removed, and the numerical simulation results are shown in Table 5. As can be seen from Table 5, the decreasing or increasing trend of the Nepoch of M4 relative to M1 is the same as that of M3. However, in Case661 the Nepoch of M4 is further reduced compared with M3, and is reduced by 74.1% compared with M1. Removing the reactive-power guidance of the phase angle therefore brings a significant further reduction in Case661.
In summary, removing the guidance of the voltage amplitude and the reactive-power guidance of the phase angle is reasonable: compared with the general model-based deep learning method, the simplified model-based deep learning method obtains comparable, and sometimes better, results.
Calculating time comparison:
TABLE 6 comparison of computation time for each iteration under different methods
Cases tM1(s) tM2(s) tM3(s) tM4(s)
Case 30 0.13 0.26 0.21 0.20
Case 118 0.68 1.64 1.25 1.09
Case 661 35.21 51.53 45.07 43.18
Table 6 shows the computation time for each iteration using different methods. In all cases, the simplified methods M3 and M4 take less time than the general deep learning method M2 based on the basic model. In the Case30 and Case118 standard examples, there was not much difference in the calculation time using the different methods. However, for a large practical power system, the difference is very significant.
For the practical 661-bus system, method M3 reduces the computation time per iteration by 6.64 seconds compared with M2 by removing the guidance of the voltage amplitude. By further removing the reactive-power guidance of the phase angle, method M4 reduces the per-iteration computation time of M2 by 8.35 seconds.
In conclusion, the proposed model-based simplified deep learning method M4 can significantly reduce the computational stress while maintaining high performance, compared to the model-based general deep learning method M2.
In conclusion, the invention realizes an effective combination of the physical model and the data-driven deep learning technology, and proposes a simplified model-based deep learning method according to the physical characteristics of the power transmission network, retaining the advantage of model-driven guidance while keeping the computation lightweight. The simulation results also verify the accuracy and validity of the proposed method. Therefore, the method can provide technical support for high-precision and fast calculation of the probabilistic power flow of the power system.

Claims (2)

1. The probability power flow deep learning calculation method based on the physical model is characterized by mainly comprising the following steps of:
1) acquiring power system data;
2) establishing a loss function of the probabilistic load flow analysis deep neural network, and updating a parameter theta of the deep neural network;
the method mainly comprises the following steps of establishing a loss function of the probabilistic power flow analysis deep neural network:
2.1) determining the objective function loss, namely:
Figure FDA0002521574540000011
where m is the number of training samples per training round, L is the number of layers, Yout is the output feature vector of the power system probabilistic power flow, and Xin is the input feature vector of the power system probabilistic power flow;
Figure FDA0002521574540000012
representing a first layer encoding function;
Figure FDA0002521574540000013
representing the L th layer encoding function, loss representing the loss function;
wherein, when i is 1, 2, 3 …, L-1, the i-th layer coding function
Figure FDA0002521574540000014
As follows:
Figure FDA0002521574540000015
in the formula, Ri is the activation function of the layer-i neurons; the weight matrix wi is an n(i+1)×ni matrix; the bias vector bi is an n(i+1)-dimensional vector; ni is the number of neurons in the ith layer; X is the input of the encoding function;
when i is L, the i-th layer coding function
Figure FDA0002521574540000016
As follows:
Figure FDA0002521574540000017
in the formula, RL is the activation function of the layer-L neurons; the weight matrix wL is an n(L+1)×nL matrix; the bias vector bL is an n(L+1)-dimensional vector; nL is the number of neurons in the Lth layer;
when i is 1, 2, 3 …, L-1, the i-th layer activation function RiAs follows:
Figure FDA0002521574540000018
in the formula, x is input of a neuron, namely input data of a power system;
when i is L, the i-th layer activation function RiAs follows:
Ri(x)=RL(x)=x; (5)
2.2) preprocessing input data and output data of the power system probability power flow, namely:
Figure FDA0002521574540000019
in the formula, vout represents the preprocessed input or output data vector of the power system probabilistic power flow; v represents the raw input or output data vector of the power system probabilistic power flow; vmean and vstd are the mean and standard deviation of the vector v, respectively;
2.3) updating the deep neural network parameter theta based on the target function loss, namely:
Figure FDA0002521574540000021
in the formula (I), the compound is shown in the specification,
Figure FDA00025215745400000212
is the partial derivative of the objective function loss with respect to the variable θ at the (t-1)th update; ⊙ is the Hadamard (element-wise) product; r is the attenuation variable; ρ and ∈ are constants; η is the learning rate of the neural network; ∇θloss is the partial derivative of the objective function loss with respect to the variable θ at the tth update;
3) the deep neural network is used for deep learning of the probability trend of the power system, and the method mainly comprises the following steps:
3.1) determining the probability power flow of the power system, namely establishing a reactive power equation and an active power equation of the probability power flow of the power system;
wherein the branch active power Pij from the ith bus to the jth bus in the power system is as follows:
Figure FDA0002521574540000022
in the formula, Vi is the voltage amplitude of bus i; θij is the voltage phase angle difference between bus i and bus j; Gij and Bij are respectively the conductance and susceptance between the ith bus and the jth bus; Vj is the voltage amplitude of bus j;
the branch reactive power Qij from the ith bus to the jth bus in the power system is as follows:
Figure FDA0002521574540000023
3.2) determine the training target lossnew, taking the node voltages of the power system as the output vector, namely:
Figure FDA0002521574540000024
in the formula (I), the compound is shown in the specification,
Figure FDA0002521574540000025
an active power equation of a branch of the power system;
Figure FDA0002521574540000026
a reactive power equation of a branch of the power system; lossnewIs an updated loss function;
active power equation of branch of electric power system
Figure FDA0002521574540000027
As follows:
Figure FDA0002521574540000028
in the formula,
Figure FDA0002521574540000029
is the estimate of Pout by the deep neural network; Pout is the normalized branch active power of the power system; || · || denotes a norm;
Figure FDA00025215745400000210
in the formula (I), the compound is shown in the specification,
Figure FDA00025215745400000211
is the estimate of Qout by the deep neural network; Qout is the normalized branch reactive power of the power system;
3.3) update the weight w with back propagation, i.e.:
w(i,T+1)=w(i,T)-Δw(i,T); (13)
in the formula, Δw(i,T) is the change of the weight matrix from the ith layer to the (i+1)th layer at the Tth parameter update; w(i,T) is the weight from the ith layer to the (i+1)th layer at the Tth parameter update; w(i,T+1) is the weight from the ith layer to the (i+1)th layer at the (T+1)th parameter update;
the change Δw(i,T) of the weight matrix from the ith layer to the (i+1)th layer at the Tth parameter update is as follows:
Figure FDA0002521574540000031
in the formula, R(i,T) is the learning-rate decay variable at the Tth iterative weight update;
R(i,T)=ρ*R(i,T-1)+(1-ρ)*dw(i,T)⊙dw(i,T); (15)
in the formula, dw(i,T) is the weight change amount; R(i,T-1) is the learning-rate decay variable at the (T-1)th iterative weight update;
the weight change amount dw(i,T) is as follows:
Figure FDA0002521574540000032
where r is the serial number of the initial sample in the batch, m is the sample size of the batch, and k is the sample index;
3.4) update bias b with back propagation, i.e.:
b(i,T+1)=b(i,T)-Δb(i,T); (17)
in the formula, Δb(i,T) is the change of the bias matrix from the ith layer to the (i+1)th layer at the Tth parameter update; b(i,T) is the bias from the ith layer to the (i+1)th layer at the Tth parameter update; b(i,T+1) is the bias from the ith layer to the (i+1)th layer at the (T+1)th parameter update;
the change Δb(i,T) of the bias matrix from the ith layer to the (i+1)th layer at the Tth parameter update is as follows:
Figure FDA0002521574540000033
in the formula, R'(i,T) is the learning-rate decay variable at the Tth iterative bias update;
R'(i,T)=ρ*R'(i,T-1)+(1-ρ)*db(i,T)⊙db(i,T); (19)
in the formula, db(i,T) is the bias change amount; R'(i,T-1) is the learning-rate decay variable at the (T-1)th iterative bias update;
bias change db(i,T)As follows:
Figure FDA0002521574540000041
3.5) determine the difference equation d(L) of the original loss function loss and the updated loss function lossnew, as follows:
d(L)=d1+d2+d3; (21)
in the formula, d1, d2 and d3 are difference equations;
wherein the ith-layer difference equation d(i) is as follows:
d(i)=d(i+1)w(i,T-1)⊙max(0,yi); (22)
where d(i+1) is the (i+1)th-layer difference equation; max(·) denotes taking the maximum value; yi is the output data of the deep neural network; w(i,T-1) is the weight from the ith layer to the (i+1)th layer at the (T-1)th parameter update;
based on equation (21) and equation (22), the weight change amount dw(i,T) satisfies the following formula:
dw(i,T)=d(i)Tyi/m; (23)
in the formula, superscript T represents transposition;
the difference parameters d1, d2 and d3 are respectively as follows:
Figure FDA0002521574540000042
Figure FDA0002521574540000043
Figure FDA0002521574540000044
where Y is the output vector of the probabilistic power flow;
Figure FDA0002521574540000045
is the output of the deep neural network; y is
Figure FDA0002521574540000046
the denormalized value of that output; P is the active power of the power system;
Figure FDA0002521574540000051
is the estimate of the active power of the power system by the deep neural network; Q is the reactive power of the power system;
Figure FDA0002521574540000052
is the estimate of the reactive power of the power system by the deep neural network;
the contribution weights of the difference equations d2 and d3 to the output feature vector of the deep neural network are respectively as follows:
Figure FDA0002521574540000053
in the formula, d1,θ, d2,θ and d3,θ denote the voltage-phase-angle output vectors in d1, d2 and d3; d1,v, d2,v and d3,v denote the voltage-amplitude output vectors in d1, d2 and d3; d[L] is the total contribution weight of the difference equations d2 and d3; dθ[L] denotes the contribution weight of d2 and d3 to the voltage-phase-angle output vector of the deep neural network; dv[L] denotes the contribution weight of d2 and d3 to the voltage-amplitude output vector of the deep neural network;
the empirical value α and the empirical value β are respectively as follows:
Figure FDA0002521574540000054
where max is a function that returns the maximum value and abs is a function that returns the absolute value;
4) establish a probabilistic power flow deep learning calculation model, comprising the following steps:
4.1) establishing a power flow probability equation derivation formula of the power system, wherein the derivation formula is shown as formulas (29) to (34);
Figure FDA0002521574540000055
in the formula, Vi is the voltage amplitude of bus i; θij is the voltage phase angle difference between bus i and bus j; Gij and Bij are respectively the conductance and susceptance between the ith bus and the jth bus; θi is the voltage phase angle of bus i; θj is the voltage phase angle of bus j; Vj is the voltage amplitude of bus j;
Figure FDA0002521574540000056
Figure FDA0002521574540000057
Figure FDA0002521574540000061
Figure FDA0002521574540000062
Figure FDA0002521574540000063
4.2) removing the guide of the physical model to the voltage amplitude in the deep neural network learning process;
4.3) removing guidance of reactive power to a voltage phase angle in the deep neural network learning process;
4.4) based on the steps 4.1) to 4.3), establishing a probabilistic power flow deep learning calculation model, which mainly comprises the following steps:
4.4.1) determine the weights of the probabilistic power flow deep learning calculation model, namely:
Figure FDA0002521574540000064
wherein the calculation of d (L) is simplified as follows:
Figure FDA0002521574540000065
in the formula (I), the compound is shown in the specification,
Figure FDA0002521574540000066
is the estimate of the parameter θ by the neural network;
d(L)=d1; (37)
wherein d (L) is the equation of difference;
4.4.2) establishing a probabilistic power flow deep learning calculation model: establishing a probabilistic power flow deep learning calculation model based on physical model driving according to the formula (2) to the formula (3), the formula (13) to the formula (15), the formula (17) to the formula (19), the formula (36) to the formula (37);
5) and calculating the probability load flow of the power system to be measured by utilizing the probability load flow deep learning calculation model.
2. The physical model-based probabilistic power flow deep learning calculation method according to claim 1, wherein the power system data mainly comprises wind speed, photovoltaic power and load.
CN201910367856.5A 2019-05-05 2019-05-05 Probability power flow deep learning calculation method based on physical model Active CN109995031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910367856.5A CN109995031B (en) 2019-05-05 2019-05-05 Probability power flow deep learning calculation method based on physical model

Publications (2)

Publication Number Publication Date
CN109995031A CN109995031A (en) 2019-07-09
CN109995031B true CN109995031B (en) 2020-07-17

Family

ID=67136040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910367856.5A Active CN109995031B (en) 2019-05-05 2019-05-05 Probability power flow deep learning calculation method based on physical model

Country Status (1)

Country Link
CN (1) CN109995031B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110676852B (en) * 2019-08-26 2020-11-10 Chongqing University Fast probabilistic power flow calculation method based on an improved extreme learning machine considering power-flow characteristics
CN110929989B (en) * 2019-10-29 2023-04-18 重庆大学 N-1 safety checking method with uncertainty based on deep learning
CN111612213B (en) * 2020-04-10 2023-10-10 中国南方电网有限责任公司 Section constraint intelligent early warning method and system based on deep learning
CN112711902A (en) * 2020-12-15 2021-04-27 国网江苏省电力有限公司淮安供电分公司 Power grid voltage calculation method based on Monte Carlo sampling and deep learning
CN113761788A (en) * 2021-07-19 2021-12-07 清华大学 SCOPF rapid calculation method and device based on deep learning

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108304623A (en) * 2018-01-15 2018-07-20 Chongqing University Online probabilistic power flow calculation method based on a stacked denoising autoencoder
CN109117951A (en) * 2018-01-15 2019-01-01 Chongqing University Online probabilistic power flow calculation method based on BP neural network
CN109412161A (en) * 2018-12-18 2019-03-01 Electric Power Research Institute of State Grid Chongqing Electric Power Company Probabilistic power flow calculation method and system
CN109412152A (en) * 2018-11-08 2019-03-01 NARI Technology Co., Ltd. Grid network-loss calculation method based on deep learning and elastic-net regularization

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10359454B2 (en) * 2016-10-03 2019-07-23 Schneider Electric USA, Inc. Method for accurately determining power usage in a two phase circuit having both two phase and single phase loads

Non-Patent Citations (2)

Title
An adaptive importance sampling method for probabilistic optimal power flow;Jie Huang etal.;《2011 IEEE Power and Energy Society General Meeting》;20111231;第1-6页 *
A survey of probabilistic power flow algorithms for power systems; Liu Yu et al.; Automation of Electric Power Systems; 2014-12-10; Vol. 38, No. 23; pp. 127-135 *

Similar Documents

Publication Publication Date Title
CN109995031B (en) Probability power flow deep learning calculation method based on physical model
CN110110434B (en) Initialization method for probability load flow deep neural network calculation
CN106600059B (en) Intelligent power grid short-term load prediction method based on improved RBF neural network
Gabrié et al. Training Restricted Boltzmann Machine via the Thouless-Anderson-Palmer free energy
Villani Bayesian reference analysis of cointegration
CN109599872B (en) Power system probability load flow calculation method based on stack noise reduction automatic encoder
CN109523155B (en) Power grid risk assessment method of Monte Carlo and least square support vector machine
CN109088407B (en) Power distribution network state estimation method based on deep belief network pseudo-measurement modeling
CN108879732B (en) Transient stability evaluation method and device for power system
CN109412161B (en) Power system probability load flow calculation method and system
CN113553755B (en) Power system state estimation method, device and equipment
CN110045606B (en) Increment space-time learning method for online modeling of distributed parameter system
CN110808581B (en) Active power distribution network power quality prediction method based on DBN-SVM
CN111369045A (en) Method for predicting short-term photovoltaic power generation power
Chinnathambi et al. Deep neural networks (DNN) for day-ahead electricity price markets
CN111898825A (en) Photovoltaic power generation power short-term prediction method and device
Akimoto et al. CMA-ES and advanced adaptation mechanisms
CN111460001A (en) Theoretical line loss rate evaluation method and system for power distribution network
CN113406503A (en) Lithium battery SOH online estimation method based on deep neural network
CN115169742A (en) Short-term wind power generation power prediction method
CN110471768A (en) A kind of load predicting method based on fastPCA-ARIMA
CN110738363A (en) photovoltaic power generation power prediction model and construction method and application thereof
CN110956304A (en) Distributed photovoltaic power generation capacity short-term prediction method based on GA-RBM
CN113919221A (en) Fan load prediction and analysis method and device based on BP neural network and storage medium
CN112149896A (en) Attention mechanism-based mechanical equipment multi-working-condition fault prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant