CN110110434B - Initialization method for probability load flow deep neural network calculation - Google Patents
- Publication number: CN110110434B (application CN201910367846.1A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0637—Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention discloses an initialization method for probabilistic power flow deep neural network calculation, which mainly comprises the following steps: 1) acquire power system data; 2) establish the loss function of the probabilistic power flow analysis deep neural network and update the deep neural network parameter θ; 3) initialize the parameters of the probabilistic power flow model; 4) based on the loss function of the probabilistic power flow deep neural network and the power system data, establish the probabilistic power flow model with the neural network. The method can be widely applied to the probabilistic power flow solution of power systems, and is particularly suitable for online analysis under the increased system uncertainty caused by high penetration of renewable energy.
Description
Technical Field
The invention relates to the field of power systems and their automation, and in particular to an initialization method for probabilistic power flow deep neural network calculation.
Background
In recent years, renewable power generation has developed rapidly on a global scale. Meanwhile, the uncertainty of the power system increases dramatically with the large-scale integration of intermittent renewable energy sources. This rapid growth in uncertainty strongly affects all sectors of the power system and threatens the safe and stable operation of the grid. The probabilistic power flow is an important tool for power system uncertainty analysis: it can fully account for various random factors and provides comprehensive and important reference information for power system planning and operation. However, the probabilistic power flow involves a large number of high-dimensional, complex nonlinear equations, and existing solution algorithms struggle to balance computational cost against accuracy. Efficient solution of the probabilistic power flow has therefore become an urgent problem in power systems with a high proportion of renewable energy.
Disclosure of Invention
The present invention is directed to solving the problems of the prior art.
The technical scheme adopted to achieve the purpose of the invention is an initialization method for probabilistic power flow deep neural network calculation, mainly comprising the following steps:
1) power system data is acquired.
The power system data mainly includes wind speed, photovoltaic power and load.
2) Establish the loss function of the probabilistic power flow analysis deep neural network, and update the deep neural network parameter θ.
The main steps in establishing the loss function of the probabilistic power flow analysis deep neural network are as follows:
2.1) Determine the objective function loss, namely:

loss = (1/m) Σ_{k=1}^{m} ||Y_out^(k) − f_L(f_{L−1}(⋯ f_1(X_in^(k))))||²   (1)

where m is the number of training samples per training round, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) is the first-layer encoding function, f_L(·) is the L-th-layer encoding function, and loss denotes the loss function. The encoding functions are

f_i(X) = R_i(w_i X + b_i)   (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input to the encoding function; and

f_L(X) = R_L(w_L X + b_L)   (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer activation function R_i is the ReLU:

R_i(x) = max(0, x)   (4)

where x is the input of the neuron, i.e., the input data of the power system.
When i = L, the i-th-layer activation function R_i is linear:
R_i(x) = R_L(x) = x   (5)
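As a concrete illustration (not part of the patent text), the encoding functions above, with ReLU hidden layers and a linear output layer, can be sketched in NumPy; the layer sizes below are arbitrary placeholders, not values from the patent.

```python
import numpy as np

def relu(x):
    # Hidden-layer activation: R_i(x) = max(0, x)
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Evaluate f_L(...f_1(x)): each layer computes R_i(w_i x + b_i),
    with a ReLU in the hidden layers and a linear last layer."""
    n_layers = len(weights)
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = w @ x + b
        x = z if i == n_layers - 1 else relu(z)  # last layer is linear
    return x

# Tiny illustration: 3 inputs -> 4 hidden neurons -> 2 outputs
rng = np.random.default_rng(0)
ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
bs = [np.zeros(4), np.zeros(2)]
y = forward(rng.standard_normal(3), ws, bs)
print(y.shape)  # (2,)
```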
2.2) Preprocess the input data and output data of the power system probabilistic power flow, namely:

v_out = (v − v_mean) / v_std   (6)

where v_out is the preprocessed input or output data vector of the power system probabilistic power flow, v is the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
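A minimal sketch of the z-score preprocessing above (the sample values are made up for illustration):

```python
import numpy as np

def zscore(v, v_mean=None, v_std=None):
    """z-score preprocessing: v_out = (v - v_mean) / v_std.
    The statistics may be supplied (e.g. from historical data)
    or computed from the vector itself."""
    v = np.asarray(v, dtype=float)
    if v_mean is None:
        v_mean = v.mean()
    if v_std is None:
        v_std = v.std()
    return (v - v_mean) / v_std

v = np.array([100.0, 120.0, 80.0, 140.0, 60.0])  # e.g. load samples in MW
v_out = zscore(v)
print(v_out.mean(), v_out.std())  # ≈ 0.0, 1.0
```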
2.3) Update the encoding parameter θ and the accumulator r of the deep neural network based on the objective function loss, namely:

r_t = ρ r_{t−1} + (1 − ρ) ∇_θloss ⊙ ∇_θloss,  θ_t = θ_{t−1} − η ∇_θloss / (√r_t + ε)   (7)

where ∇_θloss is the partial derivative (gradient) of the objective function loss with respect to θ at the t-th update, ⊙ is the Hadamard (element-wise) product, r is the moving average of the squared gradient, ρ is a constant decay factor, η is the learning rate of the neural network, and ε is a small smoothing constant.
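The RMSProp update above can be sketched as follows; the hyper-parameter values follow those given in embodiment 4 (ρ = 0.99, η = 0.001, ε = 1e−8), while the toy loss is an illustrative stand-in, not the patent's objective.

```python
import numpy as np

def rmsprop_step(theta, grad, r, rho=0.99, eta=0.001, eps=1e-8):
    """One RMSProp update: r is the moving average of the squared
    gradient; the division is element-wise, so every parameter
    component gets its own effective learning rate."""
    r = rho * r + (1.0 - rho) * grad * grad      # Hadamard square
    theta = theta - eta * grad / (np.sqrt(r) + eps)
    return theta, r

# Minimize a toy quadratic loss = ||theta||^2 for illustration
theta = np.array([1.0, -2.0])
r = np.zeros_like(theta)
for _ in range(5000):
    theta, r = rmsprop_step(theta, 2.0 * theta, r)
print(np.abs(theta).max())  # close to 0
```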
3) Initialize the parameters of the probabilistic power flow model. The parameters mainly comprise the weight parameter w and the bias parameter b.
The main steps for initializing the parameters of the probabilistic power flow model are as follows:
3.1) Forward-propagate the parameters of the probabilistic power flow model. The main steps are as follows:
3.1.1) Determine the activation vector y_i and the parameter (pre-activation) vector z_i of the i-th layer of the probabilistic power flow model, namely:

z_i = w_i y_i + b_i   (8)

where w_i is the weight matrix and b_i is the bias vector, and

y_i = R_i(z_{i−1})   (9)

The elements of the activation vector y_i are mutually independent, the elements of the parameter vector z_i are mutually independent, and the activation vector y_i and the weight matrix w_i are independent of each other.
3.1.2) Calculate the variance of the parameter vector z_i, Var[z_i], namely:

Var[z_i] = n_i Var[w_i y_i] = n_i Var[w_i] E[y_i²]   (10)

where y_i, z_i and w_i denote the random variables corresponding to each element of the activation vector y_i, the parameter vector z_i and the weight matrix w_i, respectively; n_i is the number of neurons in the i-th layer of the neural network; and E[y_i²] is the expectation of y_i². For the ReLU activation,

E[y_i²] = (1/2) Var[z_{i−1}]   (11)
Substituting equation 11 into equation 10, the variance of the parameter vector z_i, Var[z_i], is as follows:

Var[z_i] = (1/2) n_i Var[w_i] Var[z_{i−1}]   (12)

where the parameter vector z_{i−1} has zero mean and a symmetric distribution.
The variance of the parameter vector z_{L−1}, Var[z_{L−1}], is as follows:

Var[z_{L−1}] = Var[z_1] ∏_{i=2}^{L−1} ((1/2) n_i Var[w_i])   (13)

To keep the signal magnitude from shrinking or growing exponentially, the variance of the parameter vector z_{L−1}, Var[z_{L−1}], must satisfy

(1/2) n_i Var[w_i] = 1,  i = 2, …, L−1   (14)
when i is 1, the parameter vector z1As follows:
z1=w1y0。 (15)
in the formula, y0To input data.
When i is 1, the weight w1Variance of (Var [ w ]1]As follows:
3.1.3) Combining equations 8 to 16, the variance of the weight w_i in forward propagation, Var[w_i], is as follows:

Var[w_i] = 2/n_i   (17)

(the single missing factor of 1/2 in the first layer is immaterial for a deep network).
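A quick numerical check of the forward-propagation condition above (the width, depth and batch size here are arbitrary test values, not from the patent): drawing weights with Var[w_i] = 2/n_i keeps Var[z_i] roughly constant through many ReLU layers, whereas halving that variance makes the signal collapse.

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth, batch = 128, 20, 500  # illustrative sizes

def final_variance(scale):
    # Propagate a unit-variance z_1 through `depth` ReLU layers
    # (z_i = w_i y_i with b = 0), weights drawn with Var[w] = scale / n.
    z = rng.standard_normal((n, batch))
    for _ in range(depth):
        y = np.maximum(0.0, z)                            # ReLU
        w = rng.standard_normal((n, n)) * np.sqrt(scale / n)
        z = w @ y
    return z.var()

print(final_variance(2.0))  # stays on the order of 1
print(final_variance(1.0))  # shrinks by roughly (1/2)^depth
```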
3.2) Back-propagate the parameters of the probabilistic power flow model. The main steps are as follows:
3.2.1) Establish the relations between the loss function loss and the parameters of the probabilistic power flow model, shown as equations 18 and 19:

Δy_i = w_iᵀ Δz_i   (18)

where Δy_i = ∂loss/∂y_i, Δz_i = ∂loss/∂z_i, and the superscript T denotes the matrix transpose;

Δz_i = R_i′(z_i) Δy_{i+1}   (19)

where, when w_i is symmetrically distributed about 0, the gradient Δz_i has zero mean in all layers, and the weight w_i and the gradient Δz_i are independent of each other.
3.2.2) Requiring likewise that the gradient magnitude neither vanishes nor explodes exponentially, the variance of the weight w_i in back-propagation, Var[w_i], is as follows:

Var[Δy_i] = n_{i+1} Var[w_i] E[(Δz_i)²]   (20)

E[(Δz_i)²] = (1/2) Var[Δy_{i+1}]   (21)

Var[w_i] = 2/n_{i+1}   (22)
3.3) Combining equations 17 and 22, and noting that the number of neurons in each layer of the probabilistic power flow model is the same (so that 2/n_i = 2/n_{i+1}), the variance of the weight w_i is

Var[w_i] = 2/n_i   (23)

3.4) Initialize the weight parameter w of the probabilistic power flow model from a Gaussian distribution with mean 0, and initialize the bias parameter b of the probabilistic power flow model to 0. The weight parameter w satisfies the following equation:

w_i ~ N(0, 2/n_i),  i.e. Std[w_i] = √(2/n_i)   (24)
4) Based on the loss function of the probabilistic power flow deep neural network and the power system data, establish the probabilistic power flow model with the neural network.
It is worth noting that the learning strategy for the probabilistic power flow is determined from four aspects: the objective function, the activation function, the parameter initialization method and the learning algorithm, enabling the deep neural network to effectively mine the complex nonlinear characteristics of the probabilistic power flow. On this basis, under the derived learning strategy, the proposed deep neural network parameter initialization method improves the learning efficiency of the deep neural network on the probabilistic power flow.
The technical effect of the present invention is clear. Under the same experimental conditions, the initialization method provided by the invention reaches the convergence condition more quickly, and achieves higher convergence accuracy within the same number of iteration rounds. The proposed method therefore markedly improves the learning efficiency of the probabilistic power flow deep neural network without any additional computational cost. The method can be widely applied to the probabilistic power flow solution of power systems, and is particularly suitable for online analysis under the increased system uncertainty caused by high penetration of renewable energy.
Drawings
FIG. 1 is a process flow diagram;
Detailed Description
The present invention is further illustrated by the following examples, but the scope of the above-described subject matter should not be construed as limited to them. Various substitutions and alterations made according to common technical knowledge and conventional means in the field, without departing from the technical idea of the invention, are covered by the scope of the present invention.
Example 1:
Referring to FIG. 1, an initialization method for probabilistic power flow deep neural network calculation mainly includes the following steps:
1) power system data is acquired.
The power system data mainly includes wind speed, photovoltaic power and load.
2) Establish the loss function of the probabilistic power flow analysis deep neural network, and update the encoding parameter θ of the deep neural network.
The main steps in establishing the loss function of the probabilistic power flow analysis deep neural network are as follows:
2.1) Determine the objective function loss, namely:

loss = (1/m) Σ_{k=1}^{m} ||Y_out^(k) − f_L(f_{L−1}(⋯ f_1(X_in^(k))))||²   (1)

where m is the number of training samples per training round, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) is the first-layer encoding function, f_L(·) is the L-th-layer encoding function, and loss denotes the loss function. The encoding functions are

f_i(X) = R_i(w_i X + b_i)   (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input to the encoding function: when i = 1, X = X_in; when i = 2, X = f_1(X_in); and so on. Finally,

f_L(X) = R_L(w_L X + b_L)   (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer activation function R_i is the ReLU:

R_i(x) = max(0, x)   (4)

where x is the input of the neuron, i.e., the input data of the power system.
When i = L, the i-th-layer activation function R_i is linear:
R_i(x) = R_L(x) = x   (5)
2.2) To improve the training efficiency of the DNN, the input and output data of the PPF should be preprocessed to eliminate the adverse effect of singular samples and numerical problems on the training process. Outliers can be handled efficiently by normalizing the samples with the z-score method, which requires only the mean and standard deviation of the historical statistics. Moreover, it preserves the distribution characteristics more faithfully than other preprocessing methods (such as the min-max method).
The input data and output data of the power system probabilistic power flow are preprocessed as

v_out = (v − v_mean) / v_std   (6)

where v_out is the preprocessed input or output data vector of the power system probabilistic power flow, v is the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
2.3) This example uses the RMSProp method as the learning algorithm. It divides the training samples into several batches, and each batch in turn is used to update the parameters. The RMSProp method adaptively updates the learning rate of each parameter by keeping a moving average of the squared gradient, reducing the training burden and avoiding poor local minima. The deep neural network parameters are updated by the RMSProp algorithm.
Based on the objective function loss, the encoding parameter θ and the accumulator r of the deep neural network are updated, namely:

r_t = ρ r_{t−1} + (1 − ρ) ∇_θloss ⊙ ∇_θloss,  θ_t = θ_{t−1} − η ∇_θloss / (√r_t + ε)   (7)

where ∇_θloss is the partial derivative (gradient) of the objective function loss with respect to θ at the t-th update, ⊙ is the Hadamard (element-wise) product, r is the moving average of the squared gradient, ρ is a constant decay factor, η is the learning rate of the neural network, and ε is a small smoothing constant. θ_t and θ_{t−1} are the parameters after the t-th and (t−1)-th updates, and r_t and r_{t−1} are likewise the accumulator after the t-th and (t−1)-th updates.
3) Initialize the parameters of the probabilistic power flow model. The parameters mainly comprise the weight parameter w and the bias parameter b.
The main steps for initializing the parameters of the probabilistic power flow model are as follows:
3.1) Forward-propagate the parameters of the probabilistic power flow model. The main steps are as follows:
3.1.1) Determine the activation vector y_i and the parameter (pre-activation) vector z_i of the i-th layer of the probabilistic power flow model, namely:

z_i = w_i y_i + b_i   (8)

where w_i is the weight matrix and b_i is the bias vector, and

y_i = R_i(z_{i−1})   (9)

The elements of the activation vector y_i are mutually independent, the elements of the parameter vector z_i are mutually independent, and the activation vector y_i and the weight matrix w_i are independent of each other.
3.1.2) Calculate the variance of the parameter vector z_i, Var[z_i], namely:

Var[z_i] = n_i Var[w_i y_i] = n_i Var[w_i] E[y_i²]   (10)

where y_i, z_i and w_i denote the random variables corresponding to each element of the activation vector y_i, the parameter vector z_i and the weight matrix w_i, respectively; n_i is the number of neurons in the i-th layer of the neural network; and E[y_i²] is the expectation of y_i². For the ReLU activation,

E[y_i²] = (1/2) Var[z_{i−1}]   (11)
Substituting equation 11 into equation 10, the variance of the parameter vector z_i, Var[z_i], is as follows:

Var[z_i] = (1/2) n_i Var[w_i] Var[z_{i−1}]   (12)

where the parameter vector z_{i−1} has zero mean and a symmetric distribution.
The variance of the parameter vector z_{L−1}, Var[z_{L−1}], is as follows:

Var[z_{L−1}] = Var[z_1] ∏_{i=2}^{L−1} ((1/2) n_i Var[w_i])   (13)

A proper initialization method should avoid exponentially reducing or amplifying the magnitude of the input signal, so the product in equation 13 should take a proper scalar value (unity). Thus the variance of the parameter vector z_{L−1}, Var[z_{L−1}], needs to satisfy

(1/2) n_i Var[w_i] = 1,  i = 2, …, L−1   (14)
When i = 1, the parameter vector z_1 is as follows:

z_1 = w_1 y_0   (15)

where y_0 is the input data. When i = 1, the variance of the weight w_1, Var[w_1], is as follows:

n_1 Var[w_1] = 1,  i.e. Var[w_1] = 1/n_1   (16)
3.1.3) Combining equations 8 to 16, the variance of the weight w_i in forward propagation, Var[w_i], is as follows:

Var[w_i] = 2/n_i   (17)

(the single missing factor of 1/2 in the first layer is immaterial for a deep network).
3.2) Back-propagate the parameters of the probabilistic power flow model. The main steps are as follows:
3.2.1) Establish the relations between the loss function loss and the parameters of the probabilistic power flow model, shown as equations 18 and 19:

Δy_i = w_iᵀ Δz_i   (18)

where Δy_i = ∂loss/∂y_i, Δz_i = ∂loss/∂z_i, and the superscript T denotes the matrix transpose;

Δz_i = R_i′(z_i) Δy_{i+1}   (19)

where, when w_i is symmetrically distributed about 0, the gradient Δz_i has zero mean in all layers, and the weight w_i and the gradient Δz_i are independent of each other. Except for the last layer, all activation functions are ReLUs.
3.2.2) Requiring that the gradient magnitude neither vanishes nor explodes exponentially, the variance of the weight w_i in back-propagation, Var[w_i], is as follows:

Var[Δy_i] = n_{i+1} Var[w_i] E[(Δz_i)²]   (20)

E[(Δz_i)²] = (1/2) Var[Δy_{i+1}]   (21)

Var[w_i] = 2/n_{i+1}   (22)
3.3) Combining equations 17 and 22, the variance of the weight w_i, Var[w_i], is as follows:

Var[w_i] = 2/n_i = 2/n_{i+1}   (23)
3.4) Initialize the weight parameter w of the probabilistic power flow model from a Gaussian distribution with mean 0, and initialize the bias parameter b of the probabilistic power flow model to 0. The weight parameter w satisfies the following equation:

w_i ~ N(0, 2/n_i),  i.e. Std[w_i] = √(2/n_i)   (24)

where Std[w_i] is the standard deviation of the weight parameter w. The number of neurons in each layer of the probabilistic power flow model is the same.
4) Based on the loss function of the probabilistic power flow deep neural network and the power system data, establish the probabilistic power flow model with the neural network.
Example 2:
An initialization method for probabilistic power flow deep neural network calculation mainly comprises the following steps:
1) power system data is acquired.
2) Establish the loss function of the probabilistic power flow analysis deep neural network, and update the deep neural network parameter θ.
3) Initialize the parameters of the probabilistic power flow model. The parameters mainly comprise the weight parameter w and the bias parameter b.
4) Based on the loss function of the probabilistic power flow deep neural network and the power system data, establish the probabilistic power flow model with the neural network.
Example 3:
The main steps of the initialization method for probabilistic power flow deep neural network calculation are as in embodiment 2, wherein the power system data mainly comprise wind speed, photovoltaic power and load.

Example 4:
The main steps of the initialization method for probabilistic power flow deep neural network calculation are as in embodiment 2, wherein the main steps of establishing the loss function of the probabilistic power flow analysis deep neural network are as follows:
1) Determine the objective function loss, namely:

loss = (1/m) Σ_{k=1}^{m} ||Y_out^(k) − f_L(f_{L−1}(⋯ f_1(X_in^(k))))||²   (1)

where m is the number of training samples per training round, L is the number of layers, Y_out is the output feature vector of the power system probabilistic power flow, X_in is the input feature vector of the power system probabilistic power flow, f_1(·) is the first-layer encoding function, f_L(·) is the L-th-layer encoding function, and loss denotes the squared loss function. The encoding functions are

f_i(X) = R_i(w_i X + b_i)   (2)

where R_i is the activation function of the layer-i neurons, the weight matrix w_i is an n_{i+1} × n_i matrix, the bias vector b_i is an n_{i+1}-dimensional vector, n_i is the number of neurons in the i-th layer, and X is the input to the encoding function; and

f_L(X) = R_L(w_L X + b_L)   (3)

where R_L is the activation function of the layer-L neurons, the weight matrix w_L is an n_{L+1} × n_L matrix, the bias vector b_L is an n_{L+1}-dimensional vector, and n_L is the number of neurons in the L-th layer.
When i = 1, 2, 3, …, L−1, the i-th-layer ReLU activation function R_i is as follows:

R_i(x) = max(0, x)   (4)

where x is the input of the neuron, i.e., the input data of the power system.
When i = L, the i-th-layer activation function R_i is linear:
R_i(x) = R_L(x) = x   (5)
2) To improve the training efficiency of the DNN, the input and output data of the PPF should be preprocessed to eliminate the adverse effect of singular samples and numerical problems on the training process. Outliers can be handled efficiently by normalizing the samples with the z-score method, which requires only the mean and standard deviation of the historical statistics. Moreover, it preserves the distribution characteristics more faithfully than other preprocessing methods (such as the min-max method).

The input data and output data of the power system probabilistic power flow are preprocessed as

v_out = (v − v_mean) / v_std   (6)

where v_out is the preprocessed input or output data vector of the power system probabilistic power flow, v is the raw input or output data vector of the power system probabilistic power flow, and v_mean and v_std are the mean and standard deviation of the vector v, respectively.
3) This embodiment adopts the RMSProp method as the learning algorithm. It divides the training samples into several batches, and each batch in turn is used to update the parameters. The RMSProp method adaptively updates the learning rate of each parameter by keeping a moving average of the squared gradient, reducing the training burden and avoiding poor local minima. The deep neural network parameters are updated by the RMSProp algorithm. Based on the objective function loss, the encoding parameter θ and the accumulator r of the deep neural network are updated, namely:

r_t = ρ r_{t−1} + (1 − ρ) ∇_θloss ⊙ ∇_θloss,  θ_t = θ_{t−1} − η ∇_θloss / (√r_t + ε)   (7)

where ∇_θloss is the partial derivative (gradient) of the objective function loss with respect to θ at the t-th update, ⊙ is the Hadamard (element-wise) product, r is the moving average of the squared gradient, ρ is a constant decay factor, η is the learning rate of the neural network, and ε is a small smoothing constant. In this embodiment, ρ = 0.99, η = 0.001, and ε = 1 × 10⁻⁸.
Example 5:
The main steps of the initialization method for probabilistic power flow deep neural network calculation are as in embodiment 2, wherein the main steps of initializing the parameters of the probabilistic power flow model are as follows:
1) Forward-propagate the parameters of the probabilistic power flow model. The main steps are as follows:
1.1) Determine the activation vector y_i and the parameter (pre-activation) vector z_i of the i-th layer of the probabilistic power flow model, namely:

z_i = w_i y_i + b_i   (8)

where w_i is the weight matrix and b_i is the bias vector, and

y_i = R_i(z_{i−1})   (9)

The elements of the activation vector y_i are mutually independent, the elements of the parameter vector z_i are mutually independent, and the activation vector y_i and the weight matrix w_i are independent of each other.
1.2) Calculate the variance of the parameter vector z_i, Var[z_i], namely:

Var[z_i] = n_i Var[w_i y_i] = n_i Var[w_i] E[y_i²]   (10)

where y_i, z_i and w_i denote the random variables corresponding to each element of the activation vector y_i, the parameter vector z_i and the weight matrix w_i, respectively; n_i is the number of elements in the activation vector y_i; and E[y_i²] is the expectation of y_i². For the ReLU activation,

E[y_i²] = (1/2) Var[z_{i−1}]   (11)
Substituting equation 11 into equation 10, the variance of the parameter vector z_i, Var[z_i], is as follows:

Var[z_i] = (1/2) n_i Var[w_i] Var[z_{i−1}]   (12)

where the parameter vector z_{i−1} has zero mean and a symmetric distribution.
The variance of the parameter vector z_{L−1}, Var[z_{L−1}], is as follows:

Var[z_{L−1}] = Var[z_1] ∏_{i=2}^{L−1} ((1/2) n_i Var[w_i])   (13)

A proper initialization method should avoid exponentially reducing or amplifying the magnitude of the input signal, so the product in equation 13 should take a proper scalar value (unity). Thus the variance of the parameter vector z_{L−1}, Var[z_{L−1}], needs to satisfy

(1/2) n_i Var[w_i] = 1,  i = 2, …, L−1   (14)
When i = 1, the parameter vector z_1 is as follows:

z_1 = w_1 y_0   (15)

where y_0 is the input data. When i = 1, the variance of the weight w_1, Var[w_1], is as follows:

n_1 Var[w_1] = 1,  i.e. Var[w_1] = 1/n_1   (16)
1.3) Combining equations 8 to 16, the variance of the weight w_i in forward propagation, Var[w_i], is as follows:

Var[w_i] = 2/n_i   (17)
2) Back-propagate the parameters of the probabilistic power flow model. The main steps are as follows:
2.1) Establish the relations between the loss function loss and the parameters of the probabilistic power flow model, shown as equations 18 and 19:

Δy_i = w_iᵀ Δz_i   (18)

where Δy_i = ∂loss/∂y_i, Δz_i = ∂loss/∂z_i, and the superscript T denotes the matrix transpose;

Δz_i = R_i′(z_i) Δy_{i+1}   (19)

where, when w_i is symmetrically distributed about 0, the gradient Δz_i has zero mean in all layers, and the weight w_i and the gradient Δz_i are independent of each other. Except for the last layer, all activation functions are ReLUs.
2.2) Requiring that the gradient magnitude neither vanishes nor explodes exponentially, the variance of the weight w_i in back-propagation, Var[w_i], is as follows:

Var[Δy_i] = n_{i+1} Var[w_i] E[(Δz_i)²]   (20)

E[(Δz_i)²] = (1/2) Var[Δy_{i+1}]   (21)

Var[w_i] = 2/n_{i+1}   (22)
3) Combining equations 17 and 22, the variance of the weight w_i, Var[w_i], is as follows:

Var[w_i] = 2/n_i = 2/n_{i+1}   (23)
4) Initialize the weight parameter w of the probabilistic power flow model from a Gaussian distribution with mean 0, and initialize the bias parameter b of the probabilistic power flow model to 0. The weight parameter w satisfies the following equation:

w_i ~ N(0, 2/n_i),  i.e. Std[w_i] = √(2/n_i)   (24)

The number of neurons in each layer of the probabilistic power flow model is the same.
Example 6:
An initialization method for probabilistic power flow deep neural network calculation mainly comprises the following steps:
1) Sample the system states: randomly draw the random input variables of the system, including wind speed, photovoltaic power and load.
2) Input the system operating conditions (the random input variables of step 1); the deep neural network directly maps them to the voltage magnitudes and phase angles of all unsolved samples, from which the power information is obtained directly. No iterative computation is needed in the whole process, so the calculation of the probabilistic power flow can be accelerated significantly.
3) Based on the results of step 2, calculate and analyze the probabilistic power flow indices, including the mean, standard deviation and probability density function of all output variables.
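The three steps of this embodiment can be sketched end to end. Everything below is illustrative: the sampling distributions, the network sizes, and in particular the stand-in network (randomly initialized here rather than trained) are assumptions, not the patent's trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1) Sample random system inputs: wind speed, photovoltaic power, load.
n_samples = 10000
wind = rng.weibull(2.0, n_samples) * 8.0    # m/s, assumed Weibull
pv = rng.normal(50.0, 10.0, n_samples)      # MW, assumed Gaussian
load = rng.normal(200.0, 20.0, n_samples)   # MW, assumed Gaussian
x = np.stack([wind, pv, load])              # shape (3, n_samples)

# 2) Map all samples through the (stand-in) DNN in one batched pass:
#    ReLU hidden layers, linear output (voltage magnitude and angle).
def forward(x, weights, biases):
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = w @ x + b[:, None]
        x = z if i == len(weights) - 1 else np.maximum(0.0, z)
    return x

sizes = [3, 100, 100, 2]                    # hypothetical layer widths
ws = [rng.standard_normal((o, i)) * np.sqrt(2.0 / i)
      for i, o in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(o) for o in sizes[1:]]
v_mag, v_ang = forward(x, ws, bs)

# 3) Probabilistic power-flow indices: mean, standard deviation, and
#    an empirical density (histogram) of each output variable.
print(v_mag.mean(), v_mag.std())
hist, edges = np.histogram(v_ang, bins=50, density=True)
```

No power-flow iteration appears anywhere: the cost per sample is a single matrix-vector pass, which is what makes the surrogate attractive for online analysis.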
Example 7:
An experiment verifying the initialization method for probabilistic power flow deep neural network calculation mainly comprises the following steps:
1) In this embodiment, simulations are performed on the IEEE 30-bus and IEEE 118-bus standard systems and a 661-node provincial system. A Monte Carlo simulation based on the Newton-Raphson algorithm is used as the reference for the probabilistic power flow analysis. The following methods are compared to verify the effectiveness of the proposed method.
M1: the deep neural network parameters are randomly initialized by the conventional method, using the designed learning method.
M2: the deep neural network parameters are initialized by the proposed method, using the designed learning method.
The hyper-parameters and numbers of training samples of the deep neural network for the different cases are shown in Table 1. In addition, the number of validation samples and test samples is 10000 for all cases. All simulations were performed on a PC with an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz and 32 GB RAM.
TABLE 1 hyper-parameter settings of deep neural networks under different examples
Case | Hidden layers | Training samples
---|---|---
Case 30 | [100 100 100] | 10000
Case 118 | [200 200 200] | 20000
Case 661 | [500 500 500 500 500] | 70000
2) Validation of the proposed method
To compare the performance of the different methods, several indicators are defined. N_epoch denotes the number of training epochs; V_loss denotes the value of the objective function; P_vm denotes the probability that the absolute error of the voltage magnitude exceeds 0.0001 p.u.; P_va denotes the probability that the absolute error of the voltage phase angle exceeds 0.01 rad; P_pf and P_qf denote the probabilities that the absolute errors of the active and reactive power exceed 5 MW.
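These exceedance-probability indicators can be computed directly from the absolute errors between the DNN predictions and the Newton-Raphson reference. A minimal sketch; the reference and prediction arrays below are hypothetical stand-ins:

```python
import numpy as np

def exceed_prob(pred, ref, threshold):
    # Fraction of test samples whose absolute error exceeds the threshold,
    # e.g. 0.0001 p.u. for P_vm, 0.01 rad for P_va, 5 MW for P_pf.
    return np.mean(np.abs(pred - ref) > threshold)

rng = np.random.default_rng(0)
ref = rng.uniform(0.95, 1.05, 10000)        # hypothetical reference magnitudes
pred = ref + rng.normal(0, 5e-5, 10000)     # hypothetical DNN predictions
p_vm = exceed_prob(pred, ref, 1e-4)
print(f"Pvm = {100 * p_vm:.1f}%")
```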
Table 2 compares the performance of M1 and M2 with the same number of iterations in the different cases. Compared with the conventional initialization method M1, the proposed improved initialization method reduces the value of the loss function more quickly and achieves better generalization in all cases. Moreover, in the two large cases, Case118 and Case661, the proposed initialization keeps all probabilistic-analysis accuracy indices below 5%, whereas the conventional initialization M1 cannot meet this accuracy requirement for all indices. In Case118, the probability P_vm that the absolute error of the voltage magnitude obtained with M1 exceeds 0.0001 p.u. is 5.4%, and the probability that the absolute error of the active power exceeds 5 MW is as high as 8.1%; in the same Case118 example, the proposed initialization method M2 reduces the value of the loss function further. In Case661, the probability that the active-power error exceeds 5 MW with the conventional method is 6.2%. The proposed improved initialization method therefore effectively improves the convergence efficiency of the DNN.
TABLE 2 comparison of Performance between M1 and M2 under different examples
Claims (5)
1. An initialization method for probabilistic power flow deep neural network calculation, characterized by mainly comprising the following steps:
1) acquiring power system data;
2) establishing a loss function of the probabilistic load flow analysis deep neural network, and updating a parameter theta of the deep neural network;
the main steps of establishing the loss function of the probabilistic power flow analysis deep neural network are as follows:
2.1) determining the objective function loss, namely:
wherein m is the number of training samples in each training round; L is the number of layers; Y_out is the output feature vector of the power system probabilistic power flow; X_in is the input feature vector of the power system probabilistic power flow; f_1 represents the first-layer encoding function; f_L represents the L-th-layer encoding function; loss represents the loss function;
in the formula, R_i is the activation function of the layer-i neurons; the weight matrix w_i is an n_{i+1}×n_i matrix; the bias vector b_i is an n_{i+1}-dimensional vector; n_i is the number of neurons in the i-th layer; x is the input of the encoding function;
in the formula, R_L is the activation function of the layer-L neurons; the weight matrix w_L is an n_{L+1}×n_L matrix; the bias vector b_L is an n_{L+1}-dimensional vector; n_L is the number of neurons in the L-th layer;
when i = 1, 2, 3, …, L-1, the i-th layer activation function R_i is as follows:
in the formula, x is the input of a neuron, namely the input data of the power system;
when i = L, the i-th layer activation function R_i is as follows:
Ri(x)=RL(x)=x; (5)
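Equations 2 to 5 define an L-layer encoder with a linear output activation R_L(x) = x; the hidden-layer activation of equation 4 is not reproduced in this text and is assumed here to be ReLU. A minimal sketch of the composed encoding function, with hypothetical layer widths:

```python
import numpy as np

def forward(x, weights, biases):
    """L-layer encoding function f_L∘…∘f_1: ReLU on hidden layers
    (an assumption for the unreproduced equation 4), identity on the output."""
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = w @ x + b
        # Linear output layer implements R_L(x) = x from equation 5.
        x = z if i == len(weights) - 1 else np.maximum(0.0, z)
    return x

rng = np.random.default_rng(0)
sizes = [6, 100, 100, 100, 4]  # hypothetical n_i per layer
ws = [rng.normal(0, np.sqrt(2 / m), (n, m)) for m, n in zip(sizes, sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]
y = forward(rng.standard_normal(6), ws, bs)
print(y.shape)
```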
2.2) preprocessing input data and output data of the power system probability power flow, namely:
in the formula, v_out represents the preprocessed input-data or output-data vector of the power system probabilistic power flow; v represents the original input-data or output-data vector of the power system probabilistic power flow; v_mean and v_std are the mean and the standard deviation of the vector v, respectively;
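Read as a per-feature standardization v_out = (v - v_mean) / v_std (equation 6 itself is not reproduced above, so this form is an assumption based on the symbol definitions), step 2.2) can be sketched as:

```python
import numpy as np

def preprocess(v):
    # v_out = (v - v_mean) / v_std, applied column-wise to the raw
    # input/output data of the probabilistic power flow.
    v_mean, v_std = v.mean(axis=0), v.std(axis=0)
    return (v - v_mean) / v_std, v_mean, v_std

raw = np.array([[0.98, 30.0], [1.02, 50.0], [1.00, 40.0]])  # hypothetical data
v_out, v_mean, v_std = preprocess(raw)
print(np.allclose(v_out.mean(axis=0), 0.0))
```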
2.3) updating the parameter theta of the deep neural network based on the target function loss, namely:
in the formula, ∂loss/∂θ is the partial derivative of the objective function loss with respect to the variable θ at the t-th update; ⊙ denotes the Hadamard (element-wise) product; r is the attenuation factor; ρ is a constant; η is the learning rate of the neural network; θ_t is the parameter after the t-th update; θ_{t-1} is the parameter after the (t-1)-th update; r_t is the accumulation term at the t-th update;
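The symbols defined for step 2.3) (Hadamard product ⊙, attenuation factor r, constant ρ, learning rate η, accumulation term r_t) match an RMSProp-style update. Since equation 7 itself is not reproduced, the sketch below assumes the standard RMSProp form:

```python
import numpy as np

def rmsprop_step(theta, grad, r_acc, r=0.9, rho=1e-8, eta=1e-3):
    # r_t = r * r_{t-1} + (1 - r) * g ⊙ g   (⊙: element-wise product)
    r_acc = r * r_acc + (1 - r) * grad * grad
    # theta_t = theta_{t-1} - eta * g / (sqrt(r_t) + rho)
    theta = theta - eta * grad / (np.sqrt(r_acc) + rho)
    return theta, r_acc

# Minimize loss = 0.5 * ||theta||^2, whose gradient is theta itself.
theta, r_acc = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(100):
    theta, r_acc = rmsprop_step(theta, theta, r_acc, eta=0.05)
print(np.abs(theta).max())
```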
3) initializing parameters of the probability power flow model; the parameters mainly comprise a weight parameter w and a bias parameter b;
the main steps for initializing the parameters of the probabilistic power flow model are as follows:
3.1) carrying out forward propagation on the parameters of the probability power flow model, and mainly comprising the following steps:
3.1.1) determining the activation vector y_i and the parameter vector z_i of the probabilistic power flow model at the i-th layer iteration, namely:
zi=wiyi+bi; (8)
in the formula, w_i is the weight matrix; b_i is the bias vector;
yi=Ri(zi-1); (9)
3.1.2) calculating the variance Var[z_i] of the parameter vector z_i, namely:
in the formula, y_i, z_i and w_i denote the random variables corresponding to the elements of the activation vector y_i, the parameter vector z_i and the weight matrix w_i, respectively; n_i is the number of neurons in the i-th layer of the neural network; E[y_i^2] is the expectation of the squared activation y_i^2; Var[·] denotes the variance;
wherein E[·] represents the expectation;
Substituting equation 11 into equation 10, the variance Var[z_i] of the parameter vector z_i is as follows:
in the formula, the parameter vector z_{i-1} has zero mean and a symmetric distribution;
vector of parameters zL-1Variance of (Var [ z ]L-1]As follows:
wherein the variance Var[z_{L-1}] of the parameter vector z_{L-1} satisfies the following formula:
when i = 1, the parameter vector z_1 is as follows:
z1=w1y0; (15)
in the formula, y_0 is the input data;
when i = 1, the variance Var[w_1] of the weight w_1 is as follows:
3.1.3) Combining equations 8 to 16, the variance Var[w_i] of the weight w_i during forward propagation is as follows:
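The forward-propagation condition can be checked numerically: with a zero-mean, symmetrically distributed z_{i-1} and a ReLU activation (an assumption, since equation 4 is not reproduced), choosing Var[w_i] = 2/n_i keeps Var[z_i] approximately equal to Var[z_{i-1}]:

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch = 400, 20000
z_prev = rng.standard_normal((batch, n))       # z_{i-1}: zero mean, symmetric
y = np.maximum(0.0, z_prev)                    # y_i = R_i(z_{i-1}), ReLU assumed
w = rng.normal(0.0, np.sqrt(2.0 / n), (n, n))  # Var[w_i] = 2/n_i
z = y @ w.T                                    # z_i = w_i y_i (b_i = 0)
# Var[z_i] = n_i * Var[w_i] * E[y_i^2] = n * (2/n) * Var[z_{i-1}]/2
print(round(z_prev.var(), 2), round(z.var(), 2))
```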
3.2) carrying out back propagation on the parameters of the probability power flow model, and mainly comprising the following steps:
3.2.1) establishing the relations between the loss function loss and the probabilistic power flow model parameters, as shown in equation 18 and equation 19, respectively;
wherein the superscript T denotes the matrix transpose;
in the formula, when w_i is distributed symmetrically about 0, the gradient ∂loss/∂z_i has zero mean in all layers;
3.2.3) The variance Var[w_i] of the weight w_i during back propagation is as follows:
3.3) Combining equations 17 and 22, the variance Var[w_i] of the weight w_i is as follows:
3.4) initializing the weight parameter w of the probabilistic power flow model based on a Gaussian distribution with mean 0; initializing the bias parameter b of the probabilistic power flow model to 0;
the weight parameter w satisfies the following equation:
4) establishing the probabilistic power flow model with the neural network, based on the loss function of the probabilistic power flow analysis deep neural network and the power system data.
2. The initialization method for probabilistic power flow deep neural network calculation according to claim 1, wherein the power system data mainly comprise wind speed, photovoltaic power and load.
3. The initialization method for probabilistic power flow deep neural network calculation according to claim 1, wherein: the elements of the activation vector y_i are independent of each other; the elements of the parameter vector z_i are independent of each other; and the activation vector y_i and the parameter vector z_i are independent of each other.
5. The initialization method for probabilistic power flow deep neural network calculation according to claim 1, wherein: the number of neurons in each layer of the probabilistic power flow model is the same.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910367846.1A CN110110434B (en) | 2019-05-05 | 2019-05-05 | Initialization method for probability load flow deep neural network calculation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110110434A CN110110434A (en) | 2019-08-09 |
CN110110434B true CN110110434B (en) | 2020-10-16 |
Family
ID=67488216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910367846.1A Active CN110110434B (en) | 2019-05-05 | 2019-05-05 | Initialization method for probability load flow deep neural network calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110110434B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110676852B (en) * | 2019-08-26 | 2020-11-10 | 重庆大学 | Improved extreme learning machine rapid probability load flow calculation method considering load flow characteristics |
CN110535146B (en) * | 2019-08-27 | 2022-09-23 | 哈尔滨工业大学 | Electric power system reactive power optimization method based on depth determination strategy gradient reinforcement learning |
CN110829434B (en) * | 2019-09-30 | 2021-04-06 | 重庆大学 | Method for improving expansibility of deep neural network tidal current model |
CN110929989B (en) * | 2019-10-29 | 2023-04-18 | 重庆大学 | N-1 safety checking method with uncertainty based on deep learning |
CN112051980B (en) * | 2020-10-13 | 2022-06-21 | 浙江大学 | Non-linear activation function computing device based on Newton iteration method |
CN112632846B (en) * | 2020-11-13 | 2023-10-24 | 国网浙江省电力有限公司绍兴供电公司 | Power transmission section limit probability assessment method of power system and electronic equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106058914A (en) * | 2016-05-27 | 2016-10-26 | 国电南瑞科技股份有限公司 | Voltage optimization method of distribution network generation predication technology based on Elman algorithm |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10404067B2 (en) * | 2016-05-09 | 2019-09-03 | Utopus Insights, Inc. | Congestion control in electric power system under load and uncertainty |
CN107732970B (en) * | 2017-11-10 | 2020-03-17 | 国网甘肃省电力公司经济技术研究院 | Static safety probability evaluation method for new energy grid-connected power system |
CN108336739B (en) * | 2018-01-15 | 2021-04-27 | 重庆大学 | RBF neural network-based probability load flow online calculation method |
CN109117951B (en) * | 2018-01-15 | 2021-11-16 | 重庆大学 | BP neural network-based probability load flow online calculation method |
CN108304623B (en) * | 2018-01-15 | 2021-05-04 | 重庆大学 | Probability load flow online calculation method based on stack noise reduction automatic encoder |
CN109412161B (en) * | 2018-12-18 | 2022-09-09 | 国网重庆市电力公司电力科学研究院 | Power system probability load flow calculation method and system |
- 2019-05-05: CN CN201910367846.1A patent granted as CN110110434B (status: Active)
Non-Patent Citations (1)
Title |
---|
Probabilistic power flow of correlated hybrid wind-photovoltaic power systems; Morteza Aien et al.; IET Renewable Power Generation; 2014-02-19; vol. 8, no. 6; pp. 649-658 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |