CN110110434B - Initialization method for probability load flow deep neural network calculation - Google Patents


Info

Publication number
CN110110434B
Authority
CN
China
Prior art keywords
vector
neural network
parameter
probability
power flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910367846.1A
Other languages
Chinese (zh)
Other versions
CN110110434A (en
Inventor
杨知方 (Yang Zhifang)
杨燕 (Yang Yan)
余娟 (Yu Juan)
代伟 (Dai Wei)
向明旭 (Xiang Mingxu)
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201910367846.1A priority Critical patent/CN110110434B/en
Publication of CN110110434A publication Critical patent/CN110110434A/en
Application granted granted Critical
Publication of CN110110434B publication Critical patent/CN110110434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0637 Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply


Abstract

The invention discloses an initialization method for probabilistic power flow deep neural network calculation, which mainly comprises the following steps: 1) acquiring power system data; 2) establishing the loss function of the probabilistic power flow deep neural network and updating the network parameter θ; 3) initializing the parameters of the probabilistic power flow model; 4) establishing the probabilistic power flow model with the neural network, based on the loss function of the deep neural network and the power system data. The method can be widely applied to the probabilistic power flow solution of power systems, and is particularly suitable for online analysis under the increased system uncertainty caused by high penetration of renewable energy.

Description

Initialization method for probability load flow deep neural network calculation
Technical Field
The invention relates to the field of power systems and their automation, and in particular to an initialization method for probabilistic power flow deep neural network calculation.
Background
In recent years, renewable power generation has developed rapidly on a global scale. However, the uncertainty of the power system increases dramatically with the large-scale integration of intermittent renewable energy sources. This rapid growth of uncertainty affects many sectors of the power system and threatens the safe and stable operation of the grid. The probabilistic power flow is an important tool for power system uncertainty analysis: it can fully account for various random factors and provides comprehensive and important reference information for power system planning and operation. However, the probabilistic power flow involves a large number of high-dimensional, complex nonlinear equations, and existing solution algorithms struggle to balance its computational cost against its accuracy. An efficient solution of the probabilistic power flow has therefore become an urgent problem for power systems with a high proportion of renewable energy.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art.
To this end, the invention adopts the following technical scheme: an initialization method for probabilistic power flow deep neural network calculation, mainly comprising the following steps:
1) power system data is acquired.
The power system data mainly includes wind speed, photovoltaic power and load.
2) Establish the loss function of the probabilistic power flow deep neural network and update the network parameter θ.
The main steps of establishing the loss function of the probabilistic power flow deep neural network are as follows:
2.1) Determine the objective function loss, namely:

    loss = (1/m) · Σ_{k=1}^{m} ‖ Y_out(k) − f_L(f_{L−1}(⋯ f_1(X_in(k)) ⋯)) ‖²   (1)

where m is the number of training samples per training round, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) denotes the first-layer encoding function, f_L(·) denotes the L-th-layer encoding function, and loss denotes the loss function.
When i = 1, 2, 3, …, L−1, the i-th-layer encoding function f_i(·) is as follows:

    f_i(X) = R_i(w_i X + b_i)   (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input of the encoding function.
When i = L, the encoding function f_L(·) is as follows:

    f_L(X) = R_L(w_L X + b_L)   (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer activation function R_i is as follows:

    R_i(x) = max(0, x)   (4)

where x is the input of the neuron, i.e., the input data of the power system.
When i = L, the i-th-layer activation function R_i is the identity:

    R_i(x) = R_L(x) = x   (5)
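The network of step 2.1 (ReLU hidden layers, identity output layer, squared-error objective) can be sketched in NumPy as follows; the function names `forward` and `mse_loss` are illustrative, not the patent's:

```python
import numpy as np

def relu(x):
    # Hidden-layer activation R_i(x) = max(0, x)
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    # f_L(f_{L-1}(...f_1(x))): ReLU hidden layers, identity output layer
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(w @ x + b)
    return weights[-1] @ x + biases[-1]

def mse_loss(y_pred, y_true):
    # Squared-error objective, averaged over the m samples of a batch
    return np.mean(np.sum((y_pred - y_true) ** 2, axis=0))
```

Each column of `x` is one sample, so a whole batch is mapped in a single pass.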
2.2) Preprocess the input data and output data of the power system probabilistic power flow, namely:

    v_out = (v − v_mean) / v_std   (6)

where v_out represents the preprocessed input or output data vector of the power system probabilistic power flow, v represents the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
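The preprocessing of step 2.2 is a one-line z-score; a minimal sketch (the name `zscore` is illustrative):

```python
import numpy as np

def zscore(v, v_mean, v_std):
    # Step 2.2: v_out = (v - v_mean) / v_std
    return (v - v_mean) / v_std
```

Because v_mean and v_std come from historical statistics, new samples can be normalized online without re-scanning the data.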
2.3) Update the encoding parameter θ and the accumulation variable r of the deep neural network based on the objective function loss, namely:

    r_t = ρ · r_{t−1} + (1 − ρ) · ∇_θ loss ⊙ ∇_θ loss
    θ_t = θ_{t−1} − (η / √(r_t + ε)) ⊙ ∇_θ loss   (7)

where ∇_θ loss is the partial derivative of the objective function loss with respect to the variable θ at the t-th update, ⊙ is the element-wise (Hadamard) product, r is the accumulated squared gradient, ρ is the decay factor (a constant), η is the learning rate of the neural network, and ε is a small constant that prevents division by zero.
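One update of step 2.3 can be sketched as follows; the default constants ρ = 0.99, η = 0.001 and ε = 1e-8 are the values given in embodiment 4, and the function name is illustrative:

```python
import numpy as np

def rmsprop_step(theta, r, grad, rho=0.99, eta=0.001, eps=1e-8):
    # RMSProp: r keeps a moving average of the squared gradient, and the
    # step size is scaled per-parameter by 1/sqrt(r + eps).
    r = rho * r + (1.0 - rho) * grad * grad
    theta = theta - eta / np.sqrt(r + eps) * grad
    return theta, r
```

The per-parameter scaling is what lets a single global learning rate η serve every layer.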
3) Initialize the parameters of the probabilistic power flow model. The parameters mainly comprise the weight parameter w and the bias parameter b.
The main steps for initializing the parameters of the probabilistic power flow model are as follows:
3.1) Forward-propagate the parameters of the probabilistic power flow model; the main steps are as follows:
3.1.1) Determine the activation vector y_i and the parameter vector z_i of the probabilistic power flow model at the i-th layer, namely:

    z_i = w_i y_i + b_i   (8)

where w_i is the weight matrix and b_i is the bias vector, and

    y_i = R_i(z_{i−1})   (9)

The elements of the activation vector y_i are mutually independent, the elements of the parameter vector z_i are mutually independent, and the activation vector and the parameter vector are independent of each other.
3.1.2) Calculate the variance Var[z_i] of the parameter vector z_i, namely:

    Var[z_i] = n_i · Var[w_i] · E[y_i²]   (10)

where y_i, z_i and w_i here denote the random variable corresponding to each element of the activation vector y_i, the parameter vector z_i and the weight matrix w_i, respectively, n_i is the number of neurons in the i-th layer of the neural network, and E[y_i²] is the expectation of y_i².
The expectation E[y_i²] of the activation vector y_i is as follows:

    E[y_i²] = (1/2) · Var[z_{i−1}]   (11)

where the parameter vector z_{i−1} has zero mean and a symmetric distribution.
Substituting equation 11 into equation 10, the variance Var[z_i] of the parameter vector z_i is as follows:

    Var[z_i] = (1/2) · n_i · Var[w_i] · Var[z_{i−1}]   (12)

The variance Var[z_{L−1}] of the parameter vector z_{L−1} is as follows:

    Var[z_{L−1}] = Var[z_1] · Π_{i=2}^{L−1} ( (1/2) · n_i · Var[w_i] )   (13)

The variance Var[z_{L−1}] of the parameter vector z_{L−1} must satisfy the following formula:

    (1/2) · n_i · Var[w_i] = 1,  i = 2, …, L−1   (14)

When i = 1, the parameter vector z_1 is as follows:

    z_1 = w_1 y_0   (15)

where y_0 is the input data. Since y_0 is not the output of a ReLU, the factor 1/2 does not apply, and when i = 1 the variance Var[w_1] of the weight w_1 is as follows:

    Var[w_1] = 1 / n_1   (16)

3.1.3) Combining equations 8 to 16, the weight variance Var[w_i] for forward propagation is as follows:

    Var[w_i] = 2 / n_i,  i = 2, …, L−1   (17)
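The forward-propagation condition of step 3.1.3 can be checked numerically: with weight variance 2/n, the mean square of the signal stays roughly constant through a stack of ReLU layers instead of vanishing or exploding. A sketch under the equal-layer-width assumption used in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256                                  # equal layer width, as assumed in the text
x = rng.normal(size=(n, 10000))          # input signal with unit mean square
ms_in = (x ** 2).mean()
for _ in range(10):                      # ten ReLU layers
    w = rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))  # Var[w] = 2/n
    x = np.maximum(0.0, w @ x)
ratio = (x ** 2).mean() / ms_in          # stays near 1: no exponential growth/decay
```

Replacing `np.sqrt(2.0 / n)` with, say, `np.sqrt(0.5 / n)` makes `ratio` collapse toward zero, which is exactly the degeneration the derivation rules out.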
3.2) Back-propagate the parameters of the probabilistic power flow model; the main steps are as follows:
3.2.1) Establish the relation between the loss function loss and the probabilistic power flow model parameters, shown in equations 18 and 19, respectively:

    ∂loss/∂y_i = w_i^T · ∂loss/∂z_i   (18)

where the superscript T denotes the matrix transpose, and

    ∂loss/∂z_i = R_{i+1}′(z_i) ⊙ ∂loss/∂y_{i+1}   (19)

3.2.2) The variance of the gradient ∂loss/∂y_i is as follows:

    Var[∂loss/∂y_i] = n_{i+1} · Var[w_i] · E[(∂loss/∂z_i)²]   (20)

where, when w_i is symmetrically distributed about 0, the gradient ∂loss/∂z_i has zero mean in every layer, and the weight w_i and the gradient ∂loss/∂z_i are independent of each other.
The expectation E[(∂loss/∂z_i)²] of the gradient ∂loss/∂z_i is as follows:

    E[(∂loss/∂z_i)²] = (1/2) · Var[∂loss/∂y_{i+1}]   (21)

3.2.3) Under the sufficient condition that the gradient is neither exponentially large nor exponentially small, the weight variance Var[w_i] for back propagation is as follows:

    Var[w_i] = 2 / n_{i+1}   (22)

3.3) Combining equations 17 and 22, and noting that all layers have the same number of neurons, the weight variance Var[w_i] is as follows:

    Var[w_i] = 2 / n_i = 2 / n_{i+1}   (23)
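The back-propagation condition of step 3.2.3 can be checked the same way: pushing a gradient through transposed weights and random ReLU masks (the derivation's assumption that each unit is active with probability 1/2) keeps its mean square roughly constant. An illustrative sketch, again under equal layer widths:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
g = rng.normal(size=(n, 10000))          # gradient arriving at the top layer
ms_in = (g ** 2).mean()
for _ in range(10):
    w = rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))  # Var[w] = 2/n_{i+1}
    mask = rng.random(size=g.shape) < 0.5  # ReLU derivative: active w.p. 1/2
    g = w.T @ (g * mask)                 # backward step through transposed weights
ratio = (g ** 2).mean() / ms_in          # stays near 1
```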
3.4) Initialize the weight parameter w of the probabilistic power flow model from a Gaussian distribution with mean 0, and initialize the bias parameter b of the probabilistic power flow model to 0.
The weight parameter w satisfies the following equation:

    w_i ~ N(0, Std[w_i]²),  Std[w_i] = √(2 / n_i)   (24)

The number of neurons in each layer of the probabilistic power flow model is the same.
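Step 3.4 then amounts to drawing each weight matrix from a zero-mean Gaussian with standard deviation √(2/n_i) (n_i the fan-in) and zeroing the biases; a sketch, with `init_params` as an illustrative name:

```python
import numpy as np

def init_params(layer_sizes, rng=None):
    # w_i ~ N(0, sqrt(2/n_i)) with n_i the fan-in of layer i; b_i = 0
    rng = np.random.default_rng() if rng is None else rng
    weights, biases = [], []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        weights.append(rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in)))
        biases.append(np.zeros((n_out, 1)))
    return weights, biases
```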
4) Based on the loss function of the probabilistic power flow deep neural network and the power system data, establish the probabilistic power flow model with the neural network.
It is worth noting that the learning strategy of the probabilistic power flow is determined from four aspects (the objective function, the activation function, the parameter-initialization method and the learning algorithm), enabling the deep neural network to effectively mine the complex nonlinear characteristics of the probabilistic power flow. On this basis, the proposed deep neural network parameter-initialization method further improves the efficiency with which the deep neural network learns the probabilistic power flow.
The technical effect of the present invention is clear. Under the same experimental conditions, the proposed initialization method reaches the convergence criterion faster, and for the same number of iterations it achieves higher convergence accuracy. The method therefore significantly improves the learning efficiency of the probabilistic power flow deep neural network without any additional computational cost.
The method can be widely applied to the probabilistic power flow solution of power systems, and is particularly suitable for online analysis under the increased system uncertainty caused by high penetration of renewable energy.
Drawings
FIG. 1 is a process flow diagram;
Detailed Description
The present invention is further illustrated by the following examples, but the scope of the claimed subject matter should not be construed as limited to them. Various substitutions and alterations made according to common technical knowledge and conventional means in the field, without departing from the technical idea of the invention, are all covered by the scope of the present invention.
Example 1:
referring to fig. 1, an initialization method for probabilistic power flow deep neural network calculation mainly includes the following steps:
1) power system data is acquired.
The power system data mainly includes wind speed, photovoltaic power and load.
2) Establish the loss function of the probabilistic power flow deep neural network and update the encoding parameter θ of the deep neural network.
The main steps of establishing the loss function of the probabilistic power flow deep neural network are as follows:
2.1) Determine the objective function loss, namely:

    loss = (1/m) · Σ_{k=1}^{m} ‖ Y_out(k) − f_L(f_{L−1}(⋯ f_1(X_in(k)) ⋯)) ‖²   (1)

where m is the number of training samples per training round, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) denotes the first-layer encoding function, f_L(·) denotes the L-th-layer encoding function, and loss denotes the loss function.
When i = 1, 2, 3, …, L−1, the i-th-layer encoding function f_i(·) is as follows:

    f_i(X) = R_i(w_i X + b_i)   (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input of the encoding function: when i = 1, X = X_in; when i = 2, X = f_1(X_in); and so on.
When i = L, the encoding function f_L(·) is as follows:

    f_L(X) = R_L(w_L X + b_L)   (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer activation function R_i is as follows:

    R_i(x) = max(0, x)   (4)

where x is the input of the neuron, i.e., the input data of the power system.
When i = L, the i-th-layer activation function R_i is the identity:

    R_i(x) = R_L(x) = x   (5)
2.2) To improve the training efficiency of the DNN, the input and output data of the PPF should be preprocessed to eliminate the adverse effects of singular samples and numerical problems on the training process. Normalizing the samples with the z-score method handles outliers efficiently and requires only the mean and standard deviation of the historical statistics. Moreover, it preserves the distribution characteristics better than other preprocessing methods (such as the min-max method).
Preprocess the input data and output data of the power system probabilistic power flow, namely:

    v_out = (v − v_mean) / v_std   (6)

where v_out represents the preprocessed input or output data vector of the power system probabilistic power flow, v represents the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
2.3) This example uses the RMSProp method as the learning algorithm. It divides the training samples into several batches, and each batch is used in turn to update the parameters. RMSProp adaptively adjusts the learning rate of each parameter by keeping a moving average of the squared gradient, which reduces the training burden and helps avoid poor local minima. The deep neural network parameters are updated by the RMSProp algorithm.
Based on the objective function loss, update the encoding parameter θ and the accumulation variable r of the deep neural network, namely:

    r_t = ρ · r_{t−1} + (1 − ρ) · ∇_θ loss ⊙ ∇_θ loss
    θ_t = θ_{t−1} − (η / √(r_t + ε)) ⊙ ∇_θ loss   (7)

where ∇_θ loss is the partial derivative of the objective function loss with respect to the variable θ at the t-th update, ⊙ is the element-wise (Hadamard) product, r is the accumulated squared gradient, ρ is the decay factor (a constant), η is the learning rate of the neural network, and ε is a small constant that prevents division by zero. θ_t and r_t are the values of θ and r after the t-th update, and θ_{t−1} and r_{t−1} are the values after the (t−1)-th update.
3) Initialize the parameters of the probabilistic power flow model. The parameters mainly comprise the weight parameter w and the bias parameter b.
The main steps for initializing the parameters of the probabilistic power flow model are as follows:
3.1) Forward-propagate the parameters of the probabilistic power flow model; the main steps are as follows:
3.1.1) Determine the activation vector y_i and the parameter vector z_i of the probabilistic power flow model at the i-th layer, namely:

    z_i = w_i y_i + b_i   (8)

where w_i is the weight matrix and b_i is the bias vector, and

    y_i = R_i(z_{i−1})   (9)

The elements of the activation vector y_i are mutually independent, the elements of the parameter vector z_i are mutually independent, and the activation vector and the parameter vector are independent of each other.
3.1.2) Calculate the variance Var[z_i] of the parameter vector z_i, namely:

    Var[z_i] = n_i · Var[w_i] · E[y_i²]   (10)

where y_i, z_i and w_i here denote the random variable corresponding to each element of the activation vector y_i, the parameter vector z_i and the weight matrix w_i, respectively, n_i is the number of neurons in the i-th layer of the neural network, and E[y_i²] is the expectation of y_i².
The expectation E[y_i²] of the activation vector y_i is as follows:

    E[y_i²] = (1/2) · Var[z_{i−1}]   (11)

where the parameter vector z_{i−1} has zero mean and a symmetric distribution.
Substituting equation 11 into equation 10, the variance Var[z_i] of the parameter vector z_i is as follows:

    Var[z_i] = (1/2) · n_i · Var[w_i] · Var[z_{i−1}]   (12)

The variance Var[z_{L−1}] of the parameter vector z_{L−1} is as follows:

    Var[z_{L−1}] = Var[z_1] · Π_{i=2}^{L−1} ( (1/2) · n_i · Var[w_i] )   (13)

A proper initialization method should avoid exponentially reducing or amplifying the magnitudes of the input signals, so equation 13 must take a proper scalar value. Thus, the variance Var[z_{L−1}] of the parameter vector z_{L−1} needs to satisfy the following equation:

    (1/2) · n_i · Var[w_i] = 1,  i = 2, …, L−1   (14)

When i = 1, the parameter vector z_1 is as follows:

    z_1 = w_1 y_0   (15)

where y_0 is the input data. Since y_0 is not the output of a ReLU, the factor 1/2 does not apply, and when i = 1 the variance Var[w_1] of the weight w_1 is as follows:

    Var[w_1] = 1 / n_1   (16)

3.1.3) Combining equations 8 to 16, the weight variance Var[w_i] for forward propagation is as follows:

    Var[w_i] = 2 / n_i,  i = 2, …, L−1   (17)
3.2) Back-propagate the parameters of the probabilistic power flow model; the main steps are as follows:
3.2.1) Establish the relation between the loss function loss and the probabilistic power flow model parameters, shown in equations 18 and 19, respectively:

    ∂loss/∂y_i = w_i^T · ∂loss/∂z_i   (18)

where the superscript T denotes the matrix transpose, and

    ∂loss/∂z_i = R_{i+1}′(z_i) ⊙ ∂loss/∂y_{i+1}   (19)

3.2.2) The variance of the gradient ∂loss/∂y_i is as follows:

    Var[∂loss/∂y_i] = n_{i+1} · Var[w_i] · E[(∂loss/∂z_i)²]   (20)

where, when w_i is symmetrically distributed about 0, the gradient ∂loss/∂z_i has zero mean in every layer, and the weight w_i and the gradient ∂loss/∂z_i are independent of each other. Except for the last layer, all activation functions are ReLUs.
The expectation E[(∂loss/∂z_i)²] of the gradient ∂loss/∂z_i is as follows:

    E[(∂loss/∂z_i)²] = (1/2) · Var[∂loss/∂y_{i+1}]   (21)

3.2.3) Under the sufficient condition that the gradient is neither exponentially large nor exponentially small, the weight variance Var[w_i] for back propagation is as follows:

    Var[w_i] = 2 / n_{i+1}   (22)

3.3) Combining equations 17 and 22, and noting that all layers have the same number of neurons, the weight variance Var[w_i] is as follows:

    Var[w_i] = 2 / n_i = 2 / n_{i+1}   (23)

3.4) Initialize the weight parameter w of the probabilistic power flow model from a Gaussian distribution with mean 0, and initialize the bias parameter b of the probabilistic power flow model to 0.
The weight parameter w satisfies the following equation:

    w_i ~ N(0, Std[w_i]²),  Std[w_i] = √(2 / n_i)   (24)

where Std[w_i] is the standard deviation of the weight parameter w. The number of neurons in each layer of the probabilistic power flow model is the same.
4) Based on the loss function of the probabilistic power flow deep neural network and the power system data, establish the probabilistic power flow model with the neural network.
Example 2:
an initialization method for probabilistic power flow deep neural network calculation mainly comprises the following steps:
1) Acquire power system data.
2) Establish the loss function of the probabilistic power flow deep neural network and update the network parameter θ.
3) Initialize the parameters of the probabilistic power flow model; the parameters mainly comprise the weight parameter w and the bias parameter b.
4) Based on the loss function of the probabilistic power flow deep neural network and the power system data, establish the probabilistic power flow model with the neural network.
Example 3:
the main steps of the initialization method for probabilistic power flow deep neural network calculation are as described in embodiment 2, wherein the power system data mainly comprise wind speed, photovoltaic power and load.
Example 4:
the main steps of the initialization method for probabilistic power flow deep neural network calculation are as described in embodiment 2, wherein the main steps of establishing the loss function of the probabilistic power flow deep neural network are as follows:
1) Determine the objective function loss, namely:

    loss = (1/m) · Σ_{k=1}^{m} ‖ Y_out(k) − f_L(f_{L−1}(⋯ f_1(X_in(k)) ⋯)) ‖²   (1)

where m is the number of training samples per training round, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) denotes the first-layer encoding function, f_L(·) denotes the L-th-layer encoding function, and loss denotes the squared loss function.
When i = 1, 2, 3, …, L−1, the i-th-layer encoding function f_i(·) is as follows:

    f_i(X) = R_i(w_i X + b_i)   (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input of the encoding function.
When i = L, the encoding function f_L(·) is as follows:

    f_L(X) = R_L(w_L X + b_L)   (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer ReLU activation function R_i is as follows:

    R_i(x) = max(0, x)   (4)

where x is the input of the neuron, i.e., the input data of the power system.
When i = L, the i-th-layer activation function R_i is the identity:

    R_i(x) = R_L(x) = x   (5)

2) To improve the training efficiency of the DNN, the input and output data of the PPF should be preprocessed to eliminate the adverse effects of singular samples and numerical problems on the training process. Normalizing the samples with the z-score method handles outliers efficiently and requires only the mean and standard deviation of the historical statistics. Moreover, it preserves the distribution characteristics better than other preprocessing methods (such as the min-max method).
Preprocess the input data and output data of the power system probabilistic power flow, namely:

    v_out = (v − v_mean) / v_std   (6)

where v_out represents the preprocessed input or output data vector of the power system probabilistic power flow, v represents the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
3) The present embodiment adopts the RMSProp method as the learning algorithm. It divides the training samples into several batches, and each batch is used in turn to update the parameters. RMSProp adaptively adjusts the learning rate of each parameter by keeping a moving average of the squared gradient, which reduces the training burden and helps avoid poor local minima. The deep neural network parameters are updated by the RMSProp algorithm.
Based on the objective function loss, update the encoding parameter θ and the accumulation variable r of the deep neural network, namely:

    r_t = ρ · r_{t−1} + (1 − ρ) · ∇_θ loss ⊙ ∇_θ loss
    θ_t = θ_{t−1} − (η / √(r_t + ε)) ⊙ ∇_θ loss   (7)

where ∇_θ loss is the partial derivative of the objective function loss with respect to the variable θ at the t-th update, ⊙ is the element-wise (Hadamard) product, ρ is the decay factor (a constant), and η is the learning rate of the neural network; here ρ = 0.99, η = 0.001 and ε = 1 × 10⁻⁸.
Example 5:
the main steps of the initialization method for probabilistic power flow deep neural network calculation are as described in embodiment 2, wherein the main steps of initializing the parameters of the probabilistic power flow model are as follows:
1) Forward-propagate the parameters of the probabilistic power flow model; the steps are as follows:
1.1) Determine the activation vector y_i and the parameter vector z_i of the probabilistic power flow model at the i-th layer, namely:

    z_i = w_i y_i + b_i   (8)

where w_i is the weight matrix and b_i is the bias vector, and

    y_i = R_i(z_{i−1})   (9)

The elements of the activation vector y_i are mutually independent, the elements of the parameter vector z_i are mutually independent, and the activation vector and the parameter vector are independent of each other.
1.2) Calculate the variance Var[z_i] of the parameter vector z_i, namely:

    Var[z_i] = n_i · Var[w_i] · E[y_i²]   (10)

where y_i, z_i and w_i here denote the random variable corresponding to each element of the activation vector y_i, the parameter vector z_i and the weight matrix w_i, respectively, n_i is the number of elements in the activation vector y_i, and E[y_i²] is the expectation of y_i².
The expectation E[y_i²] of the activation vector y_i is as follows:

    E[y_i²] = (1/2) · Var[z_{i−1}]   (11)

where the parameter vector z_{i−1} has zero mean and a symmetric distribution.
Substituting equation 11 into equation 10, the variance Var[z_i] of the parameter vector z_i is as follows:

    Var[z_i] = (1/2) · n_i · Var[w_i] · Var[z_{i−1}]   (12)

The variance Var[z_{L−1}] of the parameter vector z_{L−1} is as follows:

    Var[z_{L−1}] = Var[z_1] · Π_{i=2}^{L−1} ( (1/2) · n_i · Var[w_i] )   (13)

A proper initialization method should avoid exponentially reducing or amplifying the magnitudes of the input signals, so equation 13 must take a proper scalar value. Thus, the variance Var[z_{L−1}] of the parameter vector z_{L−1} needs to satisfy the following equation:

    (1/2) · n_i · Var[w_i] = 1,  i = 2, …, L−1   (14)

When i = 1, the parameter vector z_1 is as follows:

    z_1 = w_1 y_0   (15)

where y_0 is the input data. Since y_0 is not the output of a ReLU, the factor 1/2 does not apply, and when i = 1 the variance Var[w_1] of the weight w_1 is as follows:

    Var[w_1] = 1 / n_1   (16)

1.3) Combining equations 8 to 16, the weight variance Var[w_i] for forward propagation is as follows:

    Var[w_i] = 2 / n_i,  i = 2, …, L−1   (17)
2) Back-propagate the parameters of the probabilistic power flow model. The main steps are as follows:
2.1) Establish the relation equations between the loss function loss and the probabilistic power flow model parameters, shown as equation 18 and equation 19, respectively:
∂loss/∂y_i = w_i^T · ∂loss/∂z_i.   (18)
In the formula, the superscript T denotes the matrix transpose.
∂loss/∂z_i = R'_i(z_i) · ∂loss/∂y_{i+1}.   (19)
2.2) The variance Var[∂loss/∂y_i] of the gradient ∂loss/∂y_i is as follows:
Var[∂loss/∂y_i] = n_{i+1} Var[w_i] E[(∂loss/∂z_i)^2].   (20)
In the formula, when w_i is distributed symmetrically about 0, the gradient ∂loss/∂y_i has zero mean in all layers. The weight w_i and the gradient ∂loss/∂z_i are mutually independent. Except for the last layer, all the activation functions are ReLUs.
Wherein the expectation E[(∂loss/∂z_i)^2] of the gradient ∂loss/∂z_i is as follows:
E[(∂loss/∂z_i)^2] = (1/2) Var[∂loss/∂y_{i+1}].   (21)
2.3) In back propagation, a sufficient condition for the magnitude of the gradient to be neither exponentially amplified nor exponentially reduced is that the variance Var[w'_i] of the weight w_i satisfies:
Var[w'_i] = 2/n_{i+1}.   (22)
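The backward condition (equation 22) can be checked the same way: propagate a random output gradient through w_i^T, modeling the ReLU derivative R'(z_i) as an independent 0/1 mask active half the time (the same independence assumption the derivation uses). All sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth, batch = 256, 20, 1000   # hypothetical sizes (square layers)

def grad_variance(weight_std):
    """Back-propagate a random gradient through `depth` ReLU layers:
    each step realizes equations 18-19 with R'(z_i) drawn as a fair
    0/1 mask, and returns the variance of the resulting gradient."""
    g = rng.standard_normal((batch, n))      # gradient w.r.t. top activations
    for _ in range(depth):
        mask = rng.random((batch, n)) < 0.5  # R'_i(z_i) for a ReLU: 0 or 1
        w = rng.normal(0.0, weight_std, size=(n, n))
        g = (g * mask) @ w                   # g <- w_i^T (R' ⊙ g)
    return g.var()

stable  = grad_variance(np.sqrt(2.0 / n))   # Var[w'] = 2/n_{i+1}: preserved
decayed = grad_variance(np.sqrt(1.0 / n))   # too small: gradient vanishes

print(stable, decayed)
```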
3) Combining equations 17 and 22, the variance Var[w_i] of the weight w_i is as follows:
Var[w_i] = 2/n_i.   (23)
4) Initialize the weight parameter w of the probabilistic power flow model from a Gaussian distribution with mean 0. Initialize the bias parameter b of the probabilistic power flow model to 0.
The weight parameter w satisfies the following equation:
w_i ~ N(0, 2/n_i), i.e. Var[w_i] = 2/n_i.   (24)
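Steps 3)-4) amount to a He-style initializer. A sketch of the resulting routine (the layer-size list is hypothetical, echoing Table 1's Case30 hidden layers; numpy only):

```python
import numpy as np

def init_ppf_network(layer_sizes, rng=None):
    """Draw each weight matrix from N(0, 2/n_i) per equations 23-24,
    where n_i is the fan-in of layer i, and set every bias to 0."""
    rng = rng or np.random.default_rng(0)
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))
        b = np.zeros(n_out)                 # bias parameter b = 0
        params.append((w, b))
    return params

# hypothetical input/output widths around three 100-neuron hidden layers
params = init_ppf_network([6, 100, 100, 100, 4])
print(len(params))           # 4 (weight, bias) pairs
print(params[1][0].std())    # ~ sqrt(2/100) ≈ 0.1414
```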
The number of neurons in each layer of the probabilistic power flow model is the same.
Example 6:
An initialization method for probabilistic power flow deep neural network calculation mainly comprises the following steps:
1) Sample the system state: randomly draw the random input variables of the system, including wind speed, photovoltaic power and load.
2) Input the system operating conditions (the random input variables of step 1) into the deep neural network, which directly maps all unsolved samples to voltage magnitudes and phase angles; the power information is then solved directly from the magnitudes and phase angles. No iterative computation is needed in the whole process, so the computation speed of the probabilistic power flow is increased significantly.
3) Based on the results of step 2, calculate and analyze the probabilistic power flow indices, including the mean value, standard deviation and probability density function of all output variables.
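The three steps above can be sketched end to end. The network below is freshly initialized rather than trained (training is outside this example), and the input distributions for wind speed, photovoltaic power and load are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1) Sample system states: hypothetical distributions for the random inputs.
m = 5000
wind = rng.weibull(2.0, m) * 8.0                  # wind speed, m/s (assumed Weibull)
pv   = np.clip(rng.normal(50, 10, m), 0, None)    # PV power, MW (assumed)
load = rng.normal(120, 15, m)                     # load, MW (assumed)
X = np.column_stack([wind, pv, load])

# 2) One batched forward pass maps every sample to voltage magnitude and
#    phase angle - no per-sample Newton-Raphson iterations are needed.
sizes = [3, 100, 100, 2]                          # outputs: |V| and theta of one bus
Ws = [rng.normal(0, np.sqrt(2 / a), (b, a)) for a, b in zip(sizes[:-1], sizes[1:])]
Y = (X - X.mean(0)) / X.std(0)                    # standardize inputs (equation 6 style)
for W in Ws[:-1]:
    Y = np.maximum(Y @ W.T, 0.0)                  # hidden layers: ReLU
Y = Y @ Ws[-1].T                                  # linear output layer

# 3) Probabilistic power-flow indices over all output variables.
mean, std = Y.mean(axis=0), Y.std(axis=0)
hist, edges = np.histogram(Y[:, 0], bins=50, density=True)  # empirical PDF
print(mean, std)
```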
Example 7:
An experiment verifying the initialization method for probabilistic power flow deep neural network calculation mainly comprises the following steps:
1) In this embodiment, simulations are performed on the IEEE30 and IEEE118 standard systems and a 661-node provincial system. A Monte Carlo simulation based on the Newton-Raphson algorithm is used as the reference for the probabilistic power flow analysis. The following methods are compared to verify the effectiveness of the proposed method.
M1: the deep neural network parameters are initialized randomly according to the traditional method, using the designed learning method.
M2: the deep neural network parameters are initialized by the proposed method, using the designed learning method.
The hyper-parameters and the number of training samples of the deep neural network for the different examples are shown in Table 1. In addition, the number of validation samples and test samples is 10000 for all examples. All simulations were performed on a PC equipped with an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz and 32 GB RAM.
TABLE 1 Hyper-parameter settings of the deep neural network for the different examples

Example   Hidden layers           Number of training samples
Case30    [100 100 100]           10000
Case118   [200 200 200]           20000
Case661   [500 500 500 500 500]   70000
2) Validation of the proposed method
To compare the performance of the different methods, several indicators are used: N_epoch denotes the number of training epochs; V_loss denotes the value of the objective function; P_vm denotes the probability that the absolute error of the voltage magnitude exceeds 0.0001 p.u.; P_va denotes the probability that the absolute error of the voltage phase angle exceeds 0.01 rad; P_pf/P_qf denote the probabilities that the absolute error of the active/reactive power exceeds 5 MW.
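These accuracy indicators are empirical exceedance probabilities over the test set and can be computed in a few lines (the function name and the synthetic errors below are illustrative; the 5 MW threshold is assumed to apply to reactive power as well, as the text implies):

```python
import numpy as np

def ppf_indicators(vm_err, va_err, p_err, q_err):
    """Share of test samples whose absolute error exceeds each threshold:
    0.0001 p.u. for |V|, 0.01 rad for the phase angle, 5 MW for active
    power (the same 5 MW figure assumed for reactive power)."""
    return {
        "Pvm": float(np.mean(np.abs(vm_err) > 1e-4)),
        "Pva": float(np.mean(np.abs(va_err) > 0.01)),
        "Ppf": float(np.mean(np.abs(p_err) > 5.0)),
        "Pqf": float(np.mean(np.abs(q_err) > 5.0)),
    }

# synthetic zero-mean errors, only to exercise the function
rng = np.random.default_rng(3)
ind = ppf_indicators(rng.normal(0, 5e-5, 10000), rng.normal(0, 4e-3, 10000),
                     rng.normal(0, 2.0, 10000), rng.normal(0, 2.0, 10000))
print(ind)
```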
Table 2 compares the performance of M1 and M2 under the same number of iterations in the different cases. Compared with the conventional initialization of M1, the proposed improved initialization method reduces the value of the loss function more quickly and achieves better generalization in all examples; this is particularly visible in the Case118 example. In the two large examples, Case118 and Case661, the proposed initialization keeps all probabilistic-analysis accuracy indicators below 5%, whereas the conventional initialization M1 cannot make all of them meet the accuracy requirement. In Case118, the probability P_vm that the absolute error of the voltage magnitude computed with the conventional initialization M1 exceeds 0.0001 p.u. is 5.4%, and the probability that the absolute error of the active power exceeds 5 MW is as high as 8.1%. In Case661, the probability that the absolute error of the active power exceeds 5 MW is 6.2% for the conventional method. Therefore, the proposed improved initialization method effectively improves the convergence efficiency of the DNN.
TABLE 2 comparison of Performance between M1 and M2 under different examples
(Table 2 is reproduced as an image in the original patent document.)

Claims (5)

1. An initialization method for probabilistic power flow deep neural network calculation, characterized by mainly comprising the following steps:
1) acquiring power system data;
2) establishing a loss function of the probabilistic load flow analysis deep neural network, and updating a parameter theta of the deep neural network;
the method mainly comprises the following steps of establishing a loss function of the probabilistic power flow analysis deep neural network:
2.1) determining the objective function loss, namely:
loss = (1/m) Σ_{k=1}^{m} || Y_out - f_L(f_{L-1}(… f_1(X_in))) ||^2;   (1)
wherein m is the number of training samples in each training round; L is the number of layers; Y_out is the output feature vector of the power system probabilistic power flow; X_in is the input feature vector of the power system probabilistic power flow; f_1(·) represents the first-layer encoding function; f_L(·) represents the L-th layer encoding function; loss represents the loss function;
wherein, when i = 1, 2, 3, …, L-1, the i-th layer encoding function f_i(x) is as follows:
f_i(x) = R_i(w_i x + b_i);   (2)
In the formula, R_i is the activation function of the layer-i neurons; the weight matrix w_i is an n_{i+1} × n_i matrix; the bias vector b_i is an n_{i+1}-dimensional vector; n_i is the number of neurons in the i-th layer; x is the input of the encoding function;
when i = L, the i-th layer encoding function f_L(x) is as follows:
f_L(x) = R_L(w_L x + b_L);   (3)
In the formula, R_L is the activation function of the layer-L neurons; the weight matrix w_L is an n_{L+1} × n_L matrix; the bias vector b_L is an n_{L+1}-dimensional vector; n_L is the number of neurons in the L-th layer;
when i = 1, 2, 3, …, L-1, the i-th layer activation function R_i is as follows:
R_i(x) = max(0, x);   (4)
In the formula, x is the input of the neuron, namely the input data of the power system;
when i = L, the i-th layer activation function R_i is as follows:
R_i(x) = R_L(x) = x;   (5)
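Equations 2-5 define the network as a composition of encoding functions with ReLU hidden layers and a linear output layer. A minimal sketch of that composition (the tiny 2-3-1 network and its weights are hypothetical):

```python
import numpy as np

def relu(x):
    """Hidden-layer activation R_i, i < L (equation 4)."""
    return np.maximum(x, 0.0)

def encode(x, weights, biases):
    """Compose the layer encoding functions f_i(x) = R_i(w_i x + b_i):
    ReLU for layers 1..L-1, identity for the output layer (equation 5)."""
    L = len(weights)
    for i, (w, b) in enumerate(zip(weights, biases), start=1):
        z = w @ x + b
        x = z if i == L else relu(z)
    return x

# tiny hypothetical 2-3-1 network
w1, b1 = np.array([[1., -1.], [0., 2.], [1., 1.]]), np.zeros(3)
w2, b2 = np.array([[1., 1., -1.]]), np.array([0.5])
out = encode(np.array([1.0, 2.0]), [w1, w2], [b1, b2])
print(out)   # → [1.5]
```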
2.2) preprocessing the input data and output data of the power system probabilistic power flow, namely:
v_out = (v - v_mean) / v_std;   (6)
In the formula, v_out represents the preprocessed input-data or output-data vector of the power system probabilistic power flow; v represents the original input-data or output-data vector of the power system probabilistic power flow; v_mean and v_std are the mean and standard deviation of the vector v, respectively;
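The preprocessing of equation 6 is ordinary per-feature standardization, inverted after prediction to recover physical units. A short sketch (function names are illustrative):

```python
import numpy as np

def standardize(v):
    """Equation 6: v_out = (v - v_mean) / v_std, applied per feature."""
    v_mean, v_std = v.mean(axis=0), v.std(axis=0)
    return (v - v_mean) / v_std, v_mean, v_std

def destandardize(v_out, v_mean, v_std):
    """Invert the preprocessing to recover physical units."""
    return v_out * v_std + v_mean

v = np.array([[1.0, 100.0], [3.0, 300.0], [5.0, 200.0]])
scaled, mu, sd = standardize(v)
print(scaled.mean(axis=0))   # → ~[0, 0]
print(scaled.std(axis=0))    # → ~[1, 1]
```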
2.3) updating the parameter θ of the deep neural network based on the objective function loss, namely:
r_t = ρ r_{t-1} + (1 - ρ) g_t ⊙ g_t,  θ_t = θ_{t-1} - η g_t / (√(r_t) + δ);   (7)
In the formula, g_t = ∂loss/∂θ is the partial derivative of the objective function loss with respect to the variable θ at the t-th iteration; ⊙ is the Hadamard product; ρ is the attenuation factor; δ is a small constant; η is the learning rate of the neural network; θ_t is the parameter after the t-th update; θ_{t-1} is the parameter after the (t-1)-th update; r_t is the accumulated gradient term after the t-th update;
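The update in equation 7 is printed as an image in the patent; its symbol list (Hadamard product, attenuation factor, small constant, learning rate, accumulated term r_t) matches an RMSProp-style rule, which is the assumption behind this sketch:

```python
import numpy as np

def rmsprop_step(theta, r, grad, eta=1e-3, rho=0.9, delta=1e-8):
    """One parameter update in the style of equation 7 (an RMSProp-type
    rule - an assumption, since the printed equation is an image):
      r_t     = rho * r_{t-1} + (1 - rho) * grad (Hadamard) grad
      theta_t = theta_{t-1} - eta * grad / (sqrt(r_t) + delta)"""
    r = rho * r + (1.0 - rho) * grad * grad        # elementwise squared gradient
    theta = theta - eta * grad / (np.sqrt(r) + delta)
    return theta, r

# minimize loss(theta) = theta^2 starting from theta = 3
theta, r = np.array([3.0]), np.zeros(1)
for _ in range(500):
    grad = 2.0 * theta
    theta, r = rmsprop_step(theta, r, grad, eta=0.05)
print(theta)   # settles near 0
```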
3) initializing parameters of the probability power flow model; the parameters mainly comprise a weight parameter w and a bias parameter b;
the main steps for initializing the parameters of the probabilistic power flow model are as follows:
3.1) carrying out forward propagation on the parameters of the probability power flow model, and mainly comprising the following steps:
3.1.1) determining the activation vector y_i and the parameter vector z_i of the probabilistic power flow model at the i-th layer iteration, namely:
z_i = w_i y_i + b_i;   (8)
In the formula, w_i is a weight matrix; b_i is a bias vector;
y_i = R_i(z_{i-1});   (9)
3.1.2) calculating the variance Var[z_i] of the parameter vector z_i, namely:
Var[z_i] = n_i Var[w_i y_i] = n_i Var[w_i] E[y_i^2];   (10)
In the formula, y_i, z_i and w_i represent the random variables corresponding to each element of the activation vector y_i, the parameter vector z_i and the weight matrix w_i; n_i is the number of neurons in the i-th layer of the neural network; E[y_i^2] is the expectation of the squared activation; Var[·] represents the variance;
wherein the expectation E[y_i^2] of the activation vector y_i is as follows:
E[y_i^2] = (1/2) Var[z_{i-1}];   (11)
wherein E[·] represents the expectation;
substituting equation 11 into equation 10, the variance Var[z_i] of the parameter vector z_i is as follows:
Var[z_i] = (1/2) n_i Var[w_i] Var[z_{i-1}];   (12)
In the formula, the parameter vector z_{i-1} has zero mean and a symmetric distribution;
the variance Var[z_{L-1}] of the parameter vector z_{L-1} is as follows:
Var[z_{L-1}] = Var[z_1] · ∏_{i=2}^{L-1} (1/2) n_i Var[w_i];   (13)
wherein the variance Var[z_{L-1}] of the parameter vector z_{L-1} satisfies the following formula:
(1/2) n_i Var[w_i] = 1, i = 2, 3, …, L-1;   (14)
when i = 1, the parameter vector z_1 is as follows:
z_1 = w_1 y_0;   (15)
In the formula, y_0 is the input data;
when i = 1, the variance Var[w_1] of the weight w_1 is as follows:
Var[w_1] = 1/n_1;   (16)
3.1.3) combining equations 8 to 16, the variance Var[w_i] of the weight w_i in forward propagation is as follows:
Var[w_i] = 2/n_i, i = 2, 3, …, L-1; Var[w_1] = 1/n_1;   (17)
3.2) carrying out back propagation on the parameters of the probability power flow model, and mainly comprising the following steps:
3.2.1) establishing the relation equations between the loss function loss and the probabilistic power flow model parameters, shown as equation 18 and equation 19, respectively;
∂loss/∂y_i = w_i^T · ∂loss/∂z_i;   (18)
wherein the superscript T denotes the matrix transpose;
∂loss/∂z_i = R'_i(z_i) · ∂loss/∂y_{i+1};   (19)
3.2.2) the variance Var[∂loss/∂y_i] of the gradient ∂loss/∂y_i is as follows:
Var[∂loss/∂y_i] = n_{i+1} Var[w_i] E[(∂loss/∂z_i)^2];   (20)
In the formula, when w_i is distributed symmetrically about 0, the gradient ∂loss/∂y_i has zero mean in all layers;
wherein the expectation E[(∂loss/∂z_i)^2] of the gradient ∂loss/∂z_i is as follows:
E[(∂loss/∂z_i)^2] = (1/2) Var[∂loss/∂y_{i+1}];   (21)
3.2.3) in back propagation, the variance Var[w'_i] of the weight w_i is as follows:
Var[w'_i] = 2/n_{i+1};   (22)
3.3) combining equations 17 and 22, the variance Var[w_i] of the weight w_i is as follows:
Var[w_i] = 2/n_i;   (23)
3.4) initializing the weight parameter w of the probabilistic power flow model from a Gaussian distribution with mean 0; initializing the bias parameter b of the probabilistic power flow model to 0;
the weight parameter w satisfies the following equation:
w_i ~ N(0, 2/n_i);   (24)
4) based on the loss function of the probabilistic power flow deep neural network and the power system data, establishing the probabilistic power flow model with the neural network.
2. The method of claim 1, wherein the power system data mainly comprises wind speed, photovoltaic power and load.
3. The initialization method for probabilistic power flow deep neural network calculation of claim 1, wherein: the elements of the activation vector y_i are mutually independent; the elements of the parameter vector z_i are mutually independent; the activation vector y_i and the parameter vector are mutually independent.
4. The initialization method for probabilistic power flow deep neural network calculation of claim 1, wherein: the weight w_i and the gradient ∂loss/∂z_i are mutually independent.
5. The initialization method for probabilistic power flow deep neural network calculation of claim 1, wherein: the number of neurons in each layer of the probabilistic power flow model is the same.
CN201910367846.1A 2019-05-05 2019-05-05 Initialization method for probability load flow deep neural network calculation Active CN110110434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910367846.1A CN110110434B (en) 2019-05-05 2019-05-05 Initialization method for probability load flow deep neural network calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910367846.1A CN110110434B (en) 2019-05-05 2019-05-05 Initialization method for probability load flow deep neural network calculation

Publications (2)

Publication Number Publication Date
CN110110434A CN110110434A (en) 2019-08-09
CN110110434B true CN110110434B (en) 2020-10-16

Family

ID=67488216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910367846.1A Active CN110110434B (en) 2019-05-05 2019-05-05 Initialization method for probability load flow deep neural network calculation

Country Status (1)

Country Link
CN (1) CN110110434B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110676852B (en) * 2019-08-26 2020-11-10 重庆大学 Improved extreme learning machine rapid probability load flow calculation method considering load flow characteristics
CN110535146B (en) * 2019-08-27 2022-09-23 哈尔滨工业大学 Electric power system reactive power optimization method based on depth determination strategy gradient reinforcement learning
CN110829434B (en) * 2019-09-30 2021-04-06 重庆大学 Method for improving expansibility of deep neural network tidal current model
CN110929989B (en) * 2019-10-29 2023-04-18 重庆大学 N-1 safety checking method with uncertainty based on deep learning
CN112051980B (en) * 2020-10-13 2022-06-21 浙江大学 Non-linear activation function computing device based on Newton iteration method
CN112632846B (en) * 2020-11-13 2023-10-24 国网浙江省电力有限公司绍兴供电公司 Power transmission section limit probability assessment method of power system and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106058914A (en) * 2016-05-27 2016-10-26 国电南瑞科技股份有限公司 Voltage optimization method of distribution network generation predication technology based on Elman algorithm

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10404067B2 (en) * 2016-05-09 2019-09-03 Utopus Insights, Inc. Congestion control in electric power system under load and uncertainty
CN107732970B (en) * 2017-11-10 2020-03-17 国网甘肃省电力公司经济技术研究院 Static safety probability evaluation method for new energy grid-connected power system
CN108336739B (en) * 2018-01-15 2021-04-27 重庆大学 RBF neural network-based probability load flow online calculation method
CN109117951B (en) * 2018-01-15 2021-11-16 重庆大学 BP neural network-based probability load flow online calculation method
CN108304623B (en) * 2018-01-15 2021-05-04 重庆大学 Probability load flow online calculation method based on stack noise reduction automatic encoder
CN109412161B (en) * 2018-12-18 2022-09-09 国网重庆市电力公司电力科学研究院 Power system probability load flow calculation method and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106058914A (en) * 2016-05-27 2016-10-26 国电南瑞科技股份有限公司 Voltage optimization method of distribution network generation predication technology based on Elman algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Morteza Aien et al., "Probabilistic power flow of correlated hybrid wind-photovoltaic power systems," IET Renewable Power Generation, vol. 8, no. 6, pp. 649-658, Feb. 19, 2014 *

Also Published As

Publication number Publication date
CN110110434A (en) 2019-08-09

Similar Documents

Publication Publication Date Title
CN110110434B (en) Initialization method for probability load flow deep neural network calculation
Gu et al. Forecasting and uncertainty analysis of day-ahead photovoltaic power using a novel forecasting method
CN106600059B (en) Intelligent power grid short-term load prediction method based on improved RBF neural network
CN109995031B (en) Probability power flow deep learning calculation method based on physical model
Niu et al. Uncertainty modeling for chaotic time series based on optimal multi-input multi-output architecture: Application to offshore wind speed
CN109615146B (en) Ultra-short-term wind power prediction method based on deep learning
CN108985515B (en) New energy output prediction method and system based on independent cyclic neural network
CN109599872B (en) Power system probability load flow calculation method based on stack noise reduction automatic encoder
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN105631483A (en) Method and device for predicting short-term power load
CN110942194A (en) Wind power prediction error interval evaluation method based on TCN
CN109412161B (en) Power system probability load flow calculation method and system
CN111091236B (en) Multi-classification deep learning short-term wind power prediction method classified according to pitch angles
CN106779177A (en) Multiresolution wavelet neutral net electricity demand forecasting method based on particle group optimizing
CN111898825A (en) Photovoltaic power generation power short-term prediction method and device
CN111460001A (en) Theoretical line loss rate evaluation method and system for power distribution network
CN111625399A (en) Method and system for recovering metering data
CN106295908A (en) A kind of SVM wind power forecasting method
CN110826611A (en) Stacking sewage treatment fault diagnosis method based on weighted integration of multiple meta-classifiers
CN115275991A (en) Active power distribution network operation situation prediction method based on IEMD-TA-LSTM model
CN112149883A (en) Photovoltaic power prediction method based on FWA-BP neural network
CN111506868B (en) Ultra-short-term wind speed prediction method based on HHT weight optimization
CN114118401A (en) Neural network-based power distribution network flow prediction method, system, device and storage medium
CN113919221A (en) Fan load prediction and analysis method and device based on BP neural network and storage medium
CN110276478B (en) Short-term wind power prediction method based on segmented ant colony algorithm optimization SVM

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant