CN104700205A - Power grid network topology structure changing and parallel compensation device selecting method - Google Patents

Power grid network topology structure changing and parallel compensation device selecting method

Info

Publication number
CN104700205A
Authority
CN
China
Prior art keywords
network
elm
power station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510072840.3A
Other languages
Chinese (zh)
Other versions
CN104700205B (en)
Inventor
宋旭东
余南华
徐衍会
周克林
陈辉
张晓平
陈小军
李传健
郑文杰
唐秀朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of Guangdong Power Grid Co Ltd filed Critical Electric Power Research Institute of Guangdong Power Grid Co Ltd
Priority to CN201510072840.3A priority Critical patent/CN104700205B/en
Publication of CN104700205A publication Critical patent/CN104700205A/en
Application granted granted Critical
Publication of CN104700205B publication Critical patent/CN104700205B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention discloses a method for changing a power grid network topology and selecting parallel compensation devices, comprising: S1, grading the input vector; S2, simulating the distribution network according to the network structure parameters, load and small hydropower data, finding the network structure and compensation mode corresponding to each operating mode of the distribution network, and obtaining several groups of training sets (X, Y) and test sets (X', Y') for an extreme learning machine (ELM); S3, selecting the set of hidden-layer node numbers L_s of the ELM and the set of structural-risk-minimization regularization constants gamma_s, and selecting the RBF function as the activation function g(x); S4, training the ELM and testing it to obtain the optimal L and gamma, yielding the optimal ELM network model; S5, outputting the switch combination state with minimum network loss. By exploiting the ELM's nonlinear mapping between input and output variables and its generalization ability, the method establishes the correspondence between varying load levels, small hydropower generation and voltage requirements on the one hand, and the network topology and parallel compensation mode on the other.

Description

Method for changing a power grid network topology and selecting parallel compensation devices
Technical field
The present invention relates to a method for changing a power grid network topology and selecting parallel compensation devices, and in particular to a method for changing the network topology and reasonably selecting parallel compensation devices based on the extreme learning machine (ELM) method.
Background technology
Small hydropower is a clean, pollution-free, renewable green energy source with good ecological and social benefits, and its importance is increasingly prominent. In regions with abundant hydropower resources, vigorous development of small hydropower not only helps alleviate power shortages in the grid but also drives local economic development. Since the new century, with national requirements for energy saving, emission reduction and scientific development, small hydropower has developed especially rapidly.
However, small hydropower connects to the distribution network on a large scale in the form of distributed generation, changing the traditional operation mode of the distribution network: the grid turns from a passive into an active network, unidirectional power flow becomes bidirectional, and the voltage distribution becomes uneven with voltage fluctuations. In addition, under different power-sale agreements, small hydropower is not subject to unified grid dispatch; its relatively independent operation manifests as "disorderly grid connection", which severely impacts the stable operation of the existing distribution network and poses great challenges to main-grid voltage regulation. Voltage control under the disorderly grid connection of small hydropower has become a practical difficulty in operation control and management. Under the national requirement to vigorously develop new energy, the large number of disorderly grid-connected small hydropower stations in resource-rich areas especially needs to be managed in an "orderly" way.
Installing parallel compensation devices in the distribution network can improve voltage problems to a certain extent, but in a distribution network rich in small hydropower, voltage violations often occur at multiple points simultaneously and to different degrees, so improving the voltage profile would require compensation devices at many points and a large investment. In addition, the voltage of a distribution network rich in small hydropower varies seasonally: rainfall differs by season, so small hydropower generation differs, and the load level also differs by season, so the degree of voltage violation differs by season. Therefore, when small hydropower is connected in a disorderly manner, the distribution-network voltage needs to be controlled in real time and flexibly according to information such as load, rainfall, small hydropower generation and network structure. By appropriately changing the distribution network structure, the distribution network containing small hydropower can be made "orderly", and by appropriately selecting parallel compensation points and capacities, its voltage problems can be improved economically and conveniently.
At present, flexible changes of the topology and of the parallel compensation of a distribution network containing small hydropower can be realized by opening and closing switches. However, when the objective is that the voltage at every point of the distribution network meets requirements, conventional mathematical models have difficulty directly establishing the relationship between rainfall information, small hydropower generation, load level and the switch states that satisfy the voltage requirements.
On the other hand, the extreme learning machine (ELM), developed from single-hidden-layer feedforward neural networks (SLFNs), can capture the relationship between information such as small hydropower generation and distribution-network load level and the optimal network topology, which provides a method for controlling voltage through changes of the distribution network topology.
The ELM (Extreme Learning Machine) is a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) proposed in 2006 by Professor Huang Guangbin of Nanyang Technological University, Singapore. While keeping the network structure simple and the learning speed fast, ELM solves the network weights with the Moore-Penrose generalized inverse, obtains a small weight norm, and avoids the problems of gradient-descent-based learning methods, such as local minima, excessive iterations, and the need to choose performance indices and learning rates, so good generalization performance can be obtained. ELM can reflect the nonlinear relationship between the distribution-network load pattern and the optimal distribution-network structure, and has been applied in many fields.
Research has confirmed that, for a finite set of N distinct instances, an SLFN with a nonlinear continuous activation function needs at most N hidden-layer nodes to approximate these N instances with zero error.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for changing the network topology and selecting parallel compensation devices based on the extreme learning machine method, which uses the ELM's nonlinear mapping between input and output variables and its generalization ability to establish the correspondence between the varying load levels, small hydropower generation and voltage requirements and the network topology and parallel compensation mode.
To solve the above technical problem, the present invention adopts the following technical solution:
A method for changing a power grid network topology and selecting parallel compensation devices, characterized by comprising the following steps:
S1, grading each element of the input vector:
The monthly load consumption and monthly small hydropower generation of the distribution network are divided into 7 levels according to the percentage of the peak value accounted for by the load consumption or the small hydropower generation (the number of levels p can be set as required; in the embodiment p = 7). If the distribution network contains a total of m small hydropower stations and loads, the distribution network has p^m operating modes;
A percentage of the peak value less than or equal to 40% is level 1;
A percentage of the peak value in (40%, 50%] is level 2;
A percentage of the peak value in (50%, 60%] is level 3;
A percentage of the peak value in (60%, 70%] is level 4;
A percentage of the peak value in (70%, 80%] is level 5;
A percentage of the peak value greater than 80% and less than 90% is level 6;
A percentage of the peak value greater than or equal to 90% is level 7;
The input vector is the input vector of the extreme learning machine, that is:
X = [x_1 x_2 … x_m]^T
The output vector consists of the switch states in the distribution network, that is:
Y = [y_1 y_2 … y_n]^T
where m is the total number of loads and small hydropower stations in the network under voltage control, and n is the number of switches in the distribution network; the elements y_1, y_2, …, y_n of Y are binary, with 0 indicating that a switch is open and 1 indicating that it is closed.
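As an illustration of step S1, the following minimal Python sketch grades a monthly value by the percentage of the peak it accounts for; the function name and the sample data are assumptions for illustration only, not part of the patent, and the level-6 interval is taken as (80%, 90%) for consistency with levels 5 and 7.

def grade(value, peak):
    # Map a monthly load consumption or small-hydropower generation value
    # to one of the 7 levels of step S1, based on its percentage of the peak.
    pct = 100.0 * value / peak
    if pct <= 40:
        return 1
    if pct <= 50:
        return 2
    if pct <= 60:
        return 3
    if pct <= 70:
        return 4
    if pct <= 80:
        return 5
    if pct < 90:
        return 6
    return 7

# Illustrative data: monthly values and peak values for three loads / small hydropower plants.
raw = [3.2, 7.8, 5.1]
peaks = [10.0, 10.0, 8.0]
X = [grade(v, p) for v, p in zip(raw, peaks)]
print(X)   # -> [1, 5, 4], the graded input vector X fed to the ELM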
S2, according to the network structure parameters, load and small hydropower data, modelling and simulating the distribution network, finding the network structure and compensation mode corresponding to each operating mode of the distribution network, and obtaining several groups of training sets (X, Y) and test sets (X', Y') for the extreme learning machine (this is prior art; the training and test sets are obtained with other simulation software), where X and X' represent the load levels and small hydropower generation of the distribution network, and Y and Y' are the corresponding switch states;
S3, selecting the set of hidden-layer node numbers L_s of the ELM and the set of structural-risk-minimization regularization constants γ_s, and selecting the RBF function as the activation function g(x):
(The mathematical model of the extreme learning machine is f(x_j) = \sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = t_j, where g(x) = g(w_i \cdot x_j + b_i) is the activation function. The activation function may be a Sigmoid, Sine or RBF function, etc.; here the RBF function is chosen, of the form G(w_i, b_i, x) = g(b_i \| x - w_i \|).)
G(w_i, b_i, x) = g(b_i \| x - w_i \|)    (13);
S4, training the ELM and testing it with the test set to obtain the optimal L and γ, yielding the optimal ELM network model;
S5, saving the optimal ELM network model and, as long as the distribution network is unchanged, rapidly outputting the switch combination state with minimum network loss according to the current load pattern.
The ELM algorithm is as follows:
Given N learning sample pairs (x_i, y_i), the ELM approximates a continuous objective function f(x_i), with vectors x_i = [x_{i1}, x_{i2}, …, x_{in}]^T ∈ R^n and y_i = [y_{i1}, y_{i2}, …, y_{im}]^T ∈ R^m, i = 1, 2, …, N, and with the number L of hidden-layer nodes of the constructed single-hidden-layer network and the hidden-layer activation function g(x_i) given;
Then there exist β_i, w_i and b_i such that the SLFN approximates these N samples with zero error; the ELM model is expressed mathematically as:
f(x_j) = \sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = t_j    (1);
The ELM mathematical model applied to binary classification is:
f(x_j) = \operatorname{sign}\left( \sum_{i=1}^{L} \beta_i g_i(x_j) \right) = \operatorname{sign}\left( \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) \right) = t_j    (2);
where j = 1, 2, …, N; the network input weight vector w_i = [w_{i1}, w_{i2}, …, w_{in}]^T represents the connection weights between the input nodes and the i-th hidden-layer node; b_i is the bias of the i-th hidden-layer node; w_i \cdot x_j is the inner product of w_i and x_j, and the hidden-layer parameters w_i and b_i are generated randomly in [-1, 1]; the network output weight vector β_i = [β_{i1}, β_{i2}, …, β_{im}]^T represents the connection weights between the i-th hidden-layer node and the output nodes; i = 1, 2, …, L, where L is the number of hidden-layer nodes;
The N equations of formula (1) are expressed in matrix form as:
H\beta = T    (3);
H(w_1, \ldots, w_L, b_1, \ldots, b_L, x_1, \ldots, x_N) = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{bmatrix}_{N \times L}
\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}    (4)
H is defined as the hidden-layer output matrix of the network; since L << N, H is not a square matrix. For any given w_i and b_i, the Moore-Penrose generalized inverse theorem gives the unique solution via the generalized inverse H^{-1} of H, so β is:
\beta = H^{-1} T    (5);
From the linear least-squares norm and formula (4), the matrix H is obtained as:
H = \min_{H} \| H H^{-1} T - Y \|    (6);
where Y = [y_1, y_2, …, y_N];
The solution β obtained from the matrix H and formula (5) determines the ELM network parameters and completes the ELM network shown in Fig. 6;
The ELM network parameters are: the number of hidden-layer nodes L, the activation function g(x) and arbitrary w_i, b_i; x denotes an arbitrary input;
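For illustration, the following minimal Python/NumPy sketch implements the basic ELM fit of formulas (1)-(5) with the RBF activation of formula (13): the hidden-layer parameters w_i and b_i are drawn randomly in [-1, 1], the hidden-layer output matrix H is formed, and the output weights β are obtained from the Moore-Penrose pseudo-inverse. The concrete kernel g(u) = exp(-u^2), the dimensions and the random placeholder data are assumptions for the example, not values from the patent.

import numpy as np

rng = np.random.default_rng(0)

def rbf_hidden(W, b, X):
    # Hidden-layer output matrix of formula (13): H[j, i] = g(b_i * ||x_j - w_i||),
    # with g taken here as exp(-u^2) (the exact g is not fixed by the patent).
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)   # N x L distances
    return np.exp(-(b[None, :] * d) ** 2)

# Placeholder training data: N graded input vectors (m elements, levels 1..7)
# and N target switch vectors (n binary elements).
N, m, n, L = 200, 6, 4, 30
X = (rng.integers(1, 8, size=(N, m)) - 1) / 6.0    # levels scaled into [0, 1]
T = rng.integers(0, 2, size=(N, n)).astype(float)

# Random hidden-layer parameters w_i, b_i in [-1, 1], as in formula (1).
W = rng.uniform(-1.0, 1.0, size=(L, m))
b = rng.uniform(-1.0, 1.0, size=L)

# Formulas (3)-(5): H beta = T, beta = H^+ T (Moore-Penrose pseudo-inverse).
H = rbf_hidden(W, b, X)          # N x L hidden-layer output matrix
beta = np.linalg.pinv(H) @ T     # L x n output weights
Y_hat = H @ beta                 # network outputs, formula (1)
print("training RMS error:", float(np.sqrt(np.mean((Y_hat - T) ** 2))))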
To minimize the empirical risk and the confidence interval simultaneously, and thus minimize the actual risk, the problem is expressed as a constrained optimization model:
\min J = \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\gamma\|\varepsilon\|^2    (7);
\text{s.t.}\quad \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) - y_j = \varepsilon_j, \quad j = 1, 2, \ldots, N    (8);
where \|\beta\|^2 represents the structural risk, obtained from the margin-maximization principle; γ is the regularization constant; and the sum of squared errors \|\varepsilon\|^2 represents the fitting accuracy;
The constrained extremum problem of formulas (7) and (8) is converted into a Lagrange function and solved:
l(\beta, \varepsilon, \alpha) = \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\gamma\|\varepsilon\|^2 - \sum_{j=1}^{N} \alpha_j \left[ \beta_i g(w_i \cdot x_j + b_i) - y_j - \varepsilon_j \right]    (9);
that is: l(\beta, \varepsilon, \alpha) = \frac{1}{2}\|\beta\|^2 + \frac{\gamma}{2}\|\varepsilon\|^2 - \alpha(H\beta - Y - \varepsilon)    (10);
where α = [α_1, α_2, …, α_N] are the Lagrange multipliers;
Taking the partial derivatives of this function and setting them to zero gives the minimization conditions:
\frac{\partial l}{\partial \beta} = \beta^T - \alpha H = 0, \quad \frac{\partial l}{\partial \varepsilon} = \gamma \varepsilon^T + \alpha = 0, \quad \frac{\partial l}{\partial \alpha} = H\beta - Y - \varepsilon = 0    (11);
From (11):
\alpha = -\gamma (H\beta - Y)^T, \quad \beta = \left( \frac{I}{\gamma} + H^T H \right)^{-1} H^T Y    (12);
where I is the identity matrix.
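The regularized solution of formula (12), β = (I/γ + HᵀH)⁻¹HᵀY, can be combined with the search over the candidate sets L_s and γ_s of steps S3-S4. The sketch below is a minimal illustration under stated assumptions: the candidate values, the random training/test data and the 0.5 rounding threshold are chosen for the example, not taken from the patent.

import numpy as np

rng = np.random.default_rng(1)

def rbf_hidden(W, b, X):
    # Hidden-layer output matrix for G(w_i, b_i, x) = g(b_i * ||x - w_i||), with g(u) = exp(-u^2).
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return np.exp(-(b[None, :] * d) ** 2)

def train_regularized_elm(X, Y, L, gamma, rng):
    # Formula (12): beta = (I/gamma + H^T H)^{-1} H^T Y.
    W = rng.uniform(-1.0, 1.0, size=(L, X.shape[1]))
    b = rng.uniform(-1.0, 1.0, size=L)
    H = rbf_hidden(W, b, X)
    beta = np.linalg.solve(np.eye(L) / gamma + H.T @ H, H.T @ Y)
    return W, b, beta

def predict(model, X):
    W, b, beta = model
    return rbf_hidden(W, b, X) @ beta

def make_set(N, m, n, rng):
    # Illustrative data: graded inputs (levels 1..7, scaled) and binary switch targets.
    X = (rng.integers(1, 8, size=(N, m)) - 1) / 6.0
    Y = rng.integers(0, 2, size=(N, n)).astype(float)
    return X, Y

X_tr, Y_tr = make_set(300, 6, 4, rng)     # training set (X, Y)
X_te, Y_te = make_set(100, 6, 4, rng)     # test set (X', Y')

# Steps S3-S4: search the candidate sets L_s and gamma_s for the smallest test error.
L_s = [10, 20, 40, 80]
gamma_s = [0.1, 1.0, 10.0, 100.0]
best = None
for L in L_s:
    for gamma in gamma_s:
        model = train_regularized_elm(X_tr, Y_tr, L, gamma, rng)
        err = np.mean((predict(model, X_te) >= 0.5) != (Y_te > 0.5))
        if best is None or err < best[0]:
            best = (err, L, gamma)
print("best test error %.3f at L=%d, gamma=%g" % best)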
When the extreme learning machine is used for voltage control of a distribution network containing small hydropower, the input vector and output vector must first be determined. Here the input vector consists of the load power consumption and the small hydropower generation, that is:
X = [x_1 x_2 … x_m]^T
The output vector consists of the switch states in the distribution network, that is:
Y = [y_1 y_2 … y_n]^T
where m is the total number of loads and small hydropower stations in the network under voltage control, and n is the number of switches in the distribution network; the elements y_1, y_2, …, y_n of Y are binary, with 0 indicating that a switch is open and 1 indicating that it is closed.
Therefore, any group X corresponds to one operating mode of the distribution network, and each operating mode corresponds to one switch state Y under which the voltage condition of the distribution network is more reasonable than under other switch states; the ELM neural network determines the output weights β_i from the training samples, while the number of hidden-layer nodes L, the activation function g(x) and the input parameters w_i, b_i only need to be set once, without iteration, so the ELM network parameters are determined.
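Since the elements of Y are binary switch states, the continuous outputs of a trained ELM have to be rounded before they are applied; a minimal sketch follows (the 0.5 threshold is an assumption, not specified by the patent).

import numpy as np

def to_switch_states(y_hat, threshold=0.5):
    # Round continuous ELM outputs to the binary switch vector Y:
    # 1 = switch closed, 0 = switch open.
    return (np.asarray(y_hat) >= threshold).astype(int)

print(to_switch_states([0.93, 0.12, 0.55, -0.08]))   # -> [1 0 1 0]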
Beneficial effects: in areas rich in small hydropower, multi-point voltage violations occur in the 10 kV distribution network during the wet season, and in general the farther a point is from the 110 kV substation, the more serious its voltage violation, so compensating at many points in parallel requires excessive investment. Changing the distribution network structure changes the power-flow distribution and greatly reduces the degree of voltage violation; adding parallel compensation devices at the points with the most serious violations effectively controls the voltage there and further reduces the voltage at other points, and with suitable compensation points and capacities the distribution-network voltage can be kept within a reasonable range. In addition, the extreme learning machine (ELM) network can predict the most suitable network structure, compensation points and compensation capacities for the coming short period from real-time and historical load levels and small hydropower generation, so the network structure can be changed in advance to prevent voltage violations in the distribution network. Because the ELM rapidly derives the switch states of the distribution network from its operating conditions, changing the network structure, selecting parallel compensation and controlling the distribution-network voltage all become convenient.
Brief description of the drawings
Fig. 1 shows the distribution network structure of an embodiment of the method for changing a power grid network topology and selecting parallel compensation devices;
Fig. 1-1 is sub-figure 1 of Fig. 1;
Fig. 1-2 is sub-figure 2 of Fig. 1;
Fig. 1-3 is sub-figure 3 of Fig. 1;
Fig. 1-4 is sub-figure 4 of Fig. 1;
Fig. 1-5 is sub-figure 5 of Fig. 1;
Fig. 1-6 is sub-figure 6 of Fig. 1;
Fig. 1-7 is sub-figure 7 of Fig. 1;
Fig. 1-8 is sub-figure 8 of Fig. 1;
Fig. 1-9 is sub-figure 9 of Fig. 1;
Fig. 2 is the voltage profile at the monitoring points when the small hydropower stations are all connected to line A in the embodiment of Fig. 1;
Fig. 3 shows the voltage (1) at the monitoring points of the embodiment of Fig. 1 after the network structure is determined by the extreme learning machine for the first input during the wet season;
Fig. 4 shows the voltage (2) at the monitoring points of the embodiment of Fig. 1 after the network structure is determined by the extreme learning machine for the second input during the wet season;
Fig. 5 shows the voltage (3) at the monitoring points of the embodiment of Fig. 1 after the network structure is determined by the extreme learning machine for the third input during the wet season;
Fig. 6 is a schematic diagram of the ELM-based neural network.
Embodiment
The embodiment takes the Potou substation and the distribution network rich in small hydropower connected to it as an example; the network structure is shown in Fig. 1. In the dry season the small hydropower generation is small, and when all small hydropower stations are connected to line A the voltage at every point of the distribution network stays within (10 ± 0.5) kV, so only voltage control during the wet season is considered.
This distribution network has a typical tree structure, with the loads and small hydropower stations all connected to line A. During the wet season the small hydropower generation is large and the voltage along line A rises with increasing distance from the substation; the voltage profile is shown in Fig. 2.
To control the voltage in the distribution network during the wet season, a second line is added alongside line A; the small hydropower stations can be selectively connected to line A or to the second line, and the second line can be connected to the distribution network at the secondary side of the Potou substation, at the middle section of line A or at the tail of line A. In addition, suitable parallel compensation points and compensation capacities are selected in the distribution network so that the voltage is controlled further.
To make the voltage control of the distribution network containing small hydropower forward-looking, the load level of the distribution network in each season is obtained from historical data, the small hydropower generation is predicted from the local rainfall information, and these serve as the inputs of the extreme learning machine; the trained extreme learning machine network model with determined parameters then rapidly yields the switch states of the distribution network, as shown in Table 1.
From these switch states, the number of small hydropower stations connected to the second line, the connection position of the second line, and the parallel compensation capacities and positions in the distribution network are determined, so the distribution network structure can be changed in advance to prevent voltage violations when the small hydropower generation rises suddenly.
The concrete steps are as follows:
S1, grading each element of the input vector:
The monthly load consumption and monthly small hydropower generation of the distribution network are divided into 7 levels according to the percentage of the peak value accounted for by the load consumption or the small hydropower generation (the number of levels p can be set as required; in the present embodiment p = 7). If the distribution network contains a total of m small hydropower stations and loads, the distribution network has p^m operating modes;
A percentage of the peak value less than or equal to 40% is level 1;
A percentage of the peak value in (40%, 50%] is level 2;
A percentage of the peak value in (50%, 60%] is level 3;
A percentage of the peak value in (60%, 70%] is level 4;
A percentage of the peak value in (70%, 80%] is level 5;
A percentage of the peak value greater than 80% and less than 90% is level 6;
A percentage of the peak value greater than or equal to 90% is level 7;
The input vector is the input vector of the extreme learning machine, that is:
X = [x_1 x_2 … x_m]^T
The output vector consists of the switch states in the distribution network, that is:
Y = [y_1 y_2 … y_n]^T
where m is the total number of loads and small hydropower stations in the network under voltage control, and n is the number of switches in the distribution network; the elements y_1, y_2, …, y_n of Y are binary, with 0 indicating that a switch is open and 1 indicating that it is closed.
S2, according to the network structure parameters, load and small hydropower data, modelling and simulating the distribution network, finding the network structure and compensation mode corresponding to each operating mode of the distribution network, and obtaining several groups of training sets (X, Y) and test sets (X', Y') for the extreme learning machine (this is prior art; the training and test sets are obtained with other simulation software), where X and X' represent the load levels and small hydropower generation of the distribution network, and Y and Y' are the corresponding switch states;
S3, selecting the set of hidden-layer node numbers L_s of the ELM and the set of structural-risk-minimization regularization constants γ_s, and selecting the RBF function as the activation function g(x):
The mathematical model of the extreme learning machine is f(x_j) = \sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = t_j, where g(x) = g(w_i \cdot x_j + b_i) is the activation function; the activation function may be a Sigmoid, Sine or RBF function, etc., and here the RBF function is chosen, of the form
G(w_i, b_i, x) = g(b_i \| x - w_i \|)    (13);
S4, training the ELM and testing it with the test set to obtain the optimal L and γ, yielding the optimal ELM network model;
S5, saving the optimal ELM network model and, as long as the distribution network is unchanged, rapidly outputting the switch combination state with minimum network loss according to the current load pattern.
The ELM algorithm is as follows:
Given N learning sample pairs (x_i, y_i), the ELM approximates a continuous objective function f(x_i), with vectors x_i = [x_{i1}, x_{i2}, …, x_{in}]^T ∈ R^n and y_i = [y_{i1}, y_{i2}, …, y_{im}]^T ∈ R^m, i = 1, 2, …, N, and with the number L of hidden-layer nodes of the constructed single-hidden-layer network and the hidden-layer activation function g(x_i) given;
Then there exist β_i, w_i and b_i such that the SLFN approximates these N samples with zero error; the ELM model is expressed mathematically as:
f(x_j) = \sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = t_j    (1);
The ELM mathematical model applied to binary classification is:
f(x_j) = \operatorname{sign}\left( \sum_{i=1}^{L} \beta_i g_i(x_j) \right) = \operatorname{sign}\left( \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) \right) = t_j    (2);
where j = 1, 2, …, N; the network input weight vector w_i = [w_{i1}, w_{i2}, …, w_{in}]^T represents the connection weights between the input nodes and the i-th hidden-layer node; b_i is the bias of the i-th hidden-layer node; w_i \cdot x_j is the inner product of w_i and x_j, and the hidden-layer parameters w_i and b_i are generated randomly in [-1, 1]; the network output weight vector β_i = [β_{i1}, β_{i2}, …, β_{im}]^T represents the connection weights between the i-th hidden-layer node and the output nodes; i = 1, 2, …, L, where L is the number of hidden-layer nodes;
The N equations of formula (1) are expressed in matrix form as:
H\beta = T    (3);
H(w_1, \ldots, w_L, b_1, \ldots, b_L, x_1, \ldots, x_N) = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{bmatrix}_{N \times L}
\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}    (4)
H is defined as the hidden-layer output matrix of the network; since L << N, H is not a square matrix. For any given w_i and b_i, the Moore-Penrose generalized inverse theorem gives the unique solution via the generalized inverse H^{-1} of H, so β is:
\beta = H^{-1} T    (5);
From the linear least-squares norm and formula (4), the matrix H is obtained as:
H = \min_{H} \| H H^{-1} T - Y \|    (6);
where Y = [y_1, y_2, …, y_N];
The solution β obtained from the matrix H and formula (5) determines the ELM network parameters and completes the ELM network shown in Fig. 6;
The ELM network parameters are: the number of hidden-layer nodes L, the activation function g(x) and arbitrary w_i, b_i; x denotes an arbitrary input;
To minimize the empirical risk and the confidence interval simultaneously, and thus minimize the actual risk, the problem is expressed as a constrained optimization model:
\min J = \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\gamma\|\varepsilon\|^2    (7);
\text{s.t.}\quad \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) - y_j = \varepsilon_j, \quad j = 1, 2, \ldots, N    (8);
where \|\beta\|^2 represents the structural risk, obtained from the margin-maximization principle; γ is the regularization constant; and the sum of squared errors \|\varepsilon\|^2 represents the fitting accuracy;
The constrained extremum problem of formulas (7) and (8) is converted into a Lagrange function and solved:
l(\beta, \varepsilon, \alpha) = \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\gamma\|\varepsilon\|^2 - \sum_{j=1}^{N} \alpha_j \left[ \beta_i g(w_i \cdot x_j + b_i) - y_j - \varepsilon_j \right]    (9);
that is: l(\beta, \varepsilon, \alpha) = \frac{1}{2}\|\beta\|^2 + \frac{\gamma}{2}\|\varepsilon\|^2 - \alpha(H\beta - Y - \varepsilon)    (10);
where α = [α_1, α_2, …, α_N] are the Lagrange multipliers;
Taking the partial derivatives of this function and setting them to zero gives the minimization conditions:
\frac{\partial l}{\partial \beta} = \beta^T - \alpha H = 0, \quad \frac{\partial l}{\partial \varepsilon} = \gamma \varepsilon^T + \alpha = 0, \quad \frac{\partial l}{\partial \alpha} = H\beta - Y - \varepsilon = 0    (11);
From (11):
\alpha = -\gamma (H\beta - Y)^T, \quad \beta = \left( \frac{I}{\gamma} + H^T H \right)^{-1} H^T Y    (12);
where I is the identity matrix.
When the extreme learning machine is used for voltage control of a distribution network containing small hydropower, the input vector and output vector must first be determined. Here the input vector consists of the load power consumption and the small hydropower generation, that is:
X = [x_1 x_2 … x_m]^T
The output vector consists of the switch states in the distribution network, that is:
Y = [y_1 y_2 … y_n]^T
where m is the total number of loads and small hydropower stations in the network under voltage control, and n is the number of switches in the distribution network; the elements y_1, y_2, …, y_n of Y are binary, with 0 indicating that a switch is open and 1 indicating that it is closed.
Therefore, any group X corresponds to one operating mode of the distribution network, and each operating mode corresponds to one switch state Y under which the voltage condition of the distribution network is more reasonable than under other switch states; the ELM neural network determines the output weights β_i from the training samples, while the number of hidden-layer nodes L, the activation function g(x) and the input parameters w_i, b_i only need to be set once, without iteration, so the ELM network parameters are determined.
Figs. 3, 4 and 5 show the voltage distribution at the load points of the distribution network for the network structures obtained by the extreme learning machine with different inputs during the wet season.
Table 1: switch states and compensation capacities corresponding to the situations of Fig. 2 to Fig. 5
As can be seen from Figs. 3 to 5, after training, the extreme learning machine provides a reasonable network structure and parallel reactive compensation for a given load and small hydropower generation, so that the voltage can be controlled within a reasonable range.

Claims (2)

1. A method for changing a power grid network topology and selecting parallel compensation devices, characterized by comprising the following steps:
S1, grading each element of the input vector:
The monthly load consumption and monthly small hydropower generation of the distribution network are divided into p = 7 levels according to the percentage of the peak value accounted for by the load consumption or the small hydropower generation; if the distribution network contains a total of m small hydropower stations and loads, the distribution network has p^m operating modes;
A percentage of the peak value less than or equal to 40% is level 1;
A percentage of the peak value in (40%, 50%] is level 2;
A percentage of the peak value in (50%, 60%] is level 3;
A percentage of the peak value in (60%, 70%] is level 4;
A percentage of the peak value in (70%, 80%] is level 5;
A percentage of the peak value greater than 80% and less than 90% is level 6;
A percentage of the peak value greater than or equal to 90% is level 7;
The input vector is the input vector of the extreme learning machine, that is:
X = [x_1 x_2 … x_m]^T
The output vector consists of the switch states in the distribution network, that is:
Y = [y_1 y_2 … y_n]^T
where m is the total number of loads and small hydropower stations in the network under voltage control, and n is the number of switches in the distribution network; the elements y_1, y_2, …, y_n of Y are binary, with 0 indicating that a switch is open and 1 indicating that it is closed;
S2, according to the network structure parameters, load and small hydropower data, modelling and simulating the distribution network, finding the network structure and compensation mode corresponding to each operating mode of the distribution network, and obtaining several groups of training sets (X, Y) and test sets (X', Y') for the extreme learning machine;
wherein X and X' represent the load levels and small hydropower generation of the distribution network, and Y and Y' are the corresponding switch states;
S3, selecting the set of hidden-layer node numbers L_s of the ELM and the set of structural-risk-minimization regularization constants γ_s;
The mathematical model of the extreme learning machine is: f(x_j) = \sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = t_j;
where g(x) = g(w_i \cdot x_j + b_i) is the activation function; the activation function is the RBF function, of the form:
G(w_i, b_i, x) = g(b_i \| x - w_i \|)    (13);
S4, training the ELM and testing it with the test set to obtain the optimal L and γ, yielding the optimal ELM network model;
S5, saving the optimal ELM network model and, as long as the distribution network is unchanged, rapidly outputting the switch combination state with minimum network loss according to the current load pattern.
2. The method for changing a power grid network topology and selecting parallel compensation devices according to claim 1, characterized in that the ELM algorithm is as follows:
Given N learning sample pairs (x_i, y_i), the ELM approximates a continuous objective function f(x_i), with vectors x_i = [x_{i1}, x_{i2}, …, x_{in}]^T ∈ R^n and y_i = [y_{i1}, y_{i2}, …, y_{im}]^T ∈ R^m, i = 1, 2, …, N, and with the number L of hidden-layer nodes of the constructed single-hidden-layer network and the hidden-layer activation function g(x_i) given;
Then there exist β_i, w_i and b_i such that the SLFN approximates these N samples with zero error; the ELM model is expressed mathematically as:
f(x_j) = \sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = t_j    (1);
The ELM mathematical model applied to binary classification is:
f(x_j) = \operatorname{sign}\left( \sum_{i=1}^{L} \beta_i g_i(x_j) \right) = \operatorname{sign}\left( \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) \right) = t_j    (2);
where j = 1, 2, …, N; the network input weight vector w_i = [w_{i1}, w_{i2}, …, w_{in}]^T represents the connection weights between the input nodes and the i-th hidden-layer node; b_i is the bias of the i-th hidden-layer node; w_i \cdot x_j is the inner product of w_i and x_j, and the hidden-layer parameters w_i and b_i are generated randomly in [-1, 1]; the network output weight vector β_i = [β_{i1}, β_{i2}, …, β_{im}]^T represents the connection weights between the i-th hidden-layer node and the output nodes; i = 1, 2, …, L, where L is the number of hidden-layer nodes;
The N equations of formula (1) are expressed in matrix form as:
H\beta = T    (3);
H(w_1, \ldots, w_L, b_1, \ldots, b_L, x_1, \ldots, x_N) = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{bmatrix}_{N \times L}
\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}    (4);
H is defined as the hidden-layer output matrix of the network; since L << N, H is not a square matrix. For any given w_i and b_i, the Moore-Penrose generalized inverse theorem gives the unique solution via the generalized inverse H^{-1} of H, so β is:
\beta = H^{-1} T    (5);
From the linear least-squares norm and formula (4), the matrix H is obtained as:
H = \min_{H} \| H H^{-1} T - Y \|    (6);
where Y = [y_1, y_2, …, y_N];
The solution β obtained from the matrix H and formula (5) determines the ELM network parameters and completes the ELM network shown in Fig. 6;
The ELM network parameters are: the number of hidden-layer nodes L, the activation function g(x) and arbitrary w_i, b_i; x denotes an arbitrary input;
To minimize the empirical risk and the confidence interval simultaneously, and thus minimize the actual risk, the problem is expressed as a constrained optimization model:
\min J = \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\gamma\|\varepsilon\|^2    (7);
\text{s.t.}\quad \sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) - y_j = \varepsilon_j, \quad j = 1, 2, \ldots, N    (8);
where \|\beta\|^2 represents the structural risk, obtained from the margin-maximization principle; γ is the regularization constant; and the sum of squared errors \|\varepsilon\|^2 represents the fitting accuracy;
The constrained extremum problem of formulas (7) and (8) is converted into a Lagrange function and solved:
l(\beta, \varepsilon, \alpha) = \frac{1}{2}\|\beta\|^2 + \frac{1}{2}\gamma\|\varepsilon\|^2 - \sum_{j=1}^{N} \alpha_j \left[ \beta_i g(w_i \cdot x_j + b_i) - y_j - \varepsilon_j \right]    (9);
that is: l(\beta, \varepsilon, \alpha) = \frac{1}{2}\|\beta\|^2 + \frac{\gamma}{2}\|\varepsilon\|^2 - \alpha(H\beta - Y - \varepsilon)    (10);
where α = [α_1, α_2, …, α_N] are the Lagrange multipliers;
Taking the partial derivatives of this function and setting them to zero gives the minimization conditions:
\frac{\partial l}{\partial \beta} = \beta^T - \alpha H = 0, \quad \frac{\partial l}{\partial \varepsilon} = \gamma \varepsilon^T + \alpha = 0, \quad \frac{\partial l}{\partial \alpha} = H\beta - Y - \varepsilon = 0    (11);
From (11):
\alpha = -\gamma (H\beta - Y)^T, \quad \beta = \left( \frac{I}{\gamma} + H^T H \right)^{-1} H^T Y    (12);
where I is the identity matrix;
When the extreme learning machine is used for voltage control of a distribution network containing small hydropower, the input vector and output vector are first determined; here the input vector consists of the load power consumption and the small hydropower generation, that is:
X = [x_1 x_2 … x_m]^T
The output vector consists of the switch states in the distribution network, that is:
Y = [y_1 y_2 … y_n]^T
where m is the total number of loads and small hydropower stations in the network under voltage control, and n is the number of switches in the distribution network; the elements y_1, y_2, …, y_n of Y are binary, with 0 indicating that a switch is open and 1 indicating that it is closed.
CN201510072840.3A 2015-02-10 2015-02-10 A kind of method for changing electricity grid network topological structure and selecting paralleling compensating device Active CN104700205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510072840.3A CN104700205B (en) 2015-02-10 2015-02-10 A kind of method for changing electricity grid network topological structure and selecting paralleling compensating device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510072840.3A CN104700205B (en) 2015-02-10 2015-02-10 A kind of method for changing electricity grid network topological structure and selecting paralleling compensating device

Publications (2)

Publication Number Publication Date
CN104700205A true CN104700205A (en) 2015-06-10
CN104700205B CN104700205B (en) 2018-05-04

Family

ID=53347298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510072840.3A Active CN104700205B (en) 2015-02-10 2015-02-10 A kind of method for changing electricity grid network topological structure and selecting paralleling compensating device

Country Status (1)

Country Link
CN (1) CN104700205B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103698699A (en) * 2013-12-06 2014-04-02 西安交通大学 Asynchronous motor fault monitoring and diagnosing method based on model
CN104299043A (en) * 2014-06-13 2015-01-21 国家电网公司 Ultra-short-term load prediction method of extreme learning machine

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MUTAO HUANG; YONG TIAN: "A novel visual modeling system for time series forecast: application to the domain of hydrology", JOURNAL OF HYDROINFORMATICS *
YE QINGWEI: "Research and development of an integrated automation system for small hydropower", Wanfang Academic Journal Database *
XU SHENG; JIANG TIEZHENG; XIANG LEI: "Application of the ELM algorithm to ultra-short-term load forecasting in microgrids", Electrical Switchgear *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160437A (en) * 2015-09-25 2015-12-16 国网浙江省电力公司 Load model prediction method based on extreme learning machine
CN109389253A (en) * 2018-11-09 2019-02-26 国网四川省电力公司电力科学研究院 A kind of frequency predication method after Power System Disturbances based on credible integrated study
CN109389253B (en) * 2018-11-09 2022-04-15 国网四川省电力公司电力科学研究院 Power system frequency prediction method after disturbance based on credibility ensemble learning
CN109540522A (en) * 2018-11-16 2019-03-29 北京航空航天大学 Bearing health quantifies modeling method, device and server
CN109951336A (en) * 2019-03-24 2019-06-28 西安电子科技大学 Electric power transportation network optimization method based on gradient descent algorithm
CN109951336B (en) * 2019-03-24 2021-05-18 西安电子科技大学 Electric power transportation network optimization method based on gradient descent algorithm

Also Published As

Publication number Publication date
CN104700205B (en) 2018-05-04

Similar Documents

Publication Publication Date Title
CN104037793B (en) A kind of energy-storage units capacity collocation method being applied to active distribution network
Ranamuka et al. Flexible AC power flow control in distribution systems by coordinated control of distributed solar-PV and battery energy storage units
CN106329523A (en) Active power distribution network intelligent soft switch robust optimization modeling method taking uncertainty into consideration
CN103150606A (en) Optimal power flow optimization method of distributed power supplies
CN111049171B (en) Active power distribution network energy storage configuration method
Tang et al. Study on day-ahead optimal economic operation of active distribution networks based on Kriging model assisted particle swarm optimization with constraint handling techniques
CN106600459A (en) Optimization method for overcoming voltage deviation of photovoltaic access point
CN107947192A (en) A kind of optimal reactive power allocation method of droop control type isolated island micro-capacitance sensor
CN103593711B (en) A kind of distributed power source Optimal Configuration Method
CN104377826A (en) Active power distribution network control strategy and method
CN106253338A (en) A kind of micro-capacitance sensor stable control method based on adaptive sliding-mode observer
CN109638873A (en) A kind of distributed photovoltaic cluster Optimization Scheduling and system
CN104700205A (en) Power grid network topology structure changing and parallel compensation device selecting method
CN108717608A (en) Million kilowatt beach photovoltaic plant accesses electric network synthetic decision-making technique and system
CN102163845B (en) Optimal configuration method of distributed generations (DG) based on power moment algorithm
CN108933448B (en) Coordination control method and system for medium and low voltage distribution network containing photovoltaic power supply
CN104993525A (en) Active power distribution network coordination optimization control method considering ZIP loads
CN103490428A (en) Method and system for allocation of reactive compensation capacity of microgrid
CN104578091A (en) Non-delay optimal reactive power coordinated control system and method for multisource-containing power grid
CN106786550A (en) A kind of distributed control method and device of micro-capacitance sensor cost optimization
Saadaoui et al. Hybridization and energy storage high efficiency and low cost
CN105896613B (en) A kind of micro-capacitance sensor distribution finite-time control method for considering communication time lag
CN105071397A (en) Coordinated reactive voltage control method of different reactive compensation devices of wind power delivery
CN109560568A (en) Double-fed fan motor field maximum based on short circuit current nargin can access capacity determining methods
Shi et al. Reactive power optimization of an active distribution network including a solid state transformer using a moth swarm algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant