CN104700205B - Method for changing power grid network topology and selecting shunt compensation devices - Google Patents

Method for changing power grid network topology and selecting shunt compensation devices Download PDF

Info

Publication number
CN104700205B
CN104700205B (application CN201510072840.3A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510072840.3A
Other languages
Chinese (zh)
Other versions
CN104700205A (en)
Inventor
宋旭东
余南华
徐衍会
周克林
陈辉
张晓平
陈小军
李传健
郑文杰
唐秀朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of Guangdong Power Grid Co Ltd filed Critical Electric Power Research Institute of Guangdong Power Grid Co Ltd
Priority to CN201510072840.3A priority Critical patent/CN104700205B/en
Publication of CN104700205A publication Critical patent/CN104700205A/en
Application granted granted Critical
Publication of CN104700205B publication Critical patent/CN104700205B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

A method for changing power grid network topology and selecting shunt compensation devices: S1, divide each element of the input vector into levels; S2, run simulations based on the network structure parameters, load and small hydropower data, find the network structure and compensation scheme corresponding to each operating mode of the distribution network, and obtain multiple training sets (X, Y) and test sets (X', Y') for the extreme learning machine; S3, select the candidate set Ls of hidden-layer node numbers for the ELM and the candidate set γs of structural-risk-minimization regularization constants, and choose the RBF function as the excitation function g(x); S4, train the ELM, test it to obtain the optimal L and γ, and obtain the optimal ELM network model; S5, output the switch combination state that minimizes losses. The present invention uses the nonlinear mapping between input and output variables reflected by the ELM, and its generalization ability, to establish the correspondence between changing load levels and small hydropower generation on the one hand, and the network topology and shunt compensation scheme that satisfy the voltage requirements on the other.

Description

Method for changing power grid network topology and selecting shunt compensation devices
Technical field
The present invention relates to a method for changing power grid network topology and selecting shunt compensation devices, and more particularly to a method, based on the extreme learning machine (ELM), for changing the network topology and reasonably selecting shunt compensation devices.
Background technology
Small hydropower is a clean, pollution-free and renewable green energy source with good ecological and social benefits, and its importance is becoming increasingly prominent. In regions with abundant hydropower resources, the vigorous development of small hydropower not only helps relieve power shortages in the grid but also drives the local economy. Since the country put forward the requirements of energy conservation, emission reduction and scientific development in the new century, small hydropower has developed even more rapidly.
However, small hydropower is connected to the distribution network on a large scale in the form of distributed generation. This changes the traditional operating mode of the distribution network: the grid changes from a passive network into an active network, unidirectional power flow becomes bidirectional, and uneven voltage distribution and voltage fluctuations result. In addition, under different power-sale agreements small hydropower stations operate relatively independently, outside the unified dispatch of the grid; this "disorderly grid connection" severely impacts the stable operation of the existing distribution network and also poses great challenges to voltage regulation of the main grid. Voltage control under the disorderly grid connection of small hydropower has become a difficult point in real operation, control and management. Under the national policy of vigorously developing new energy, the disorderly grid connection of large numbers of small hydropower stations in areas rich in small hydropower resources especially needs to be brought into "orderly" management.
Installing shunt compensation devices in the distribution network can improve voltage problems to a certain extent. For a distribution network rich in small hydropower, however, voltage limit violations usually occur at multiple points at the same time and to different degrees at each point, so improving the voltage profile requires shunt compensation devices at multiple points, which is a large investment. Moreover, voltage variation in a distribution network rich in small hydropower is seasonal: rainfall differs between seasons, so small hydropower generation differs, and the load level differs as well, which leads to different degrees of voltage violation in different seasons. Therefore, when small hydropower is connected in a disorderly manner, the voltage of the distribution network must be controlled in real time and flexibly according to information such as load, rainfall, small hydropower generation and network structure. By appropriately changing the distribution network structure, the distribution network containing small hydropower can be made "orderly", and by appropriately selecting shunt compensation points and capacities, its voltage problems can be improved economically and conveniently.
At present, flexible changes to the topology of a distribution network containing small hydropower, as well as shunt compensation, can be realized by opening and closing switches. However, when the objective is that the voltage at each point of the distribution network meets the requirements, it is difficult for a conventional mathematical model to directly establish the relationship between rainfall information, small hydropower generation and load level on the one hand, and the switch states for which the distribution network voltages meet the requirements on the other.
On the other hand, the extreme learning machine (ELM), which developed from single-hidden-layer feedforward neural networks (SLFNs), can reflect the relationship between information such as small hydropower generation and distribution network load level and the optimal network topology. This provides a method for controlling voltage by changing the distribution network topology.
The ELM (Extreme Learning Machine) is a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs, Single-hidden Layer Feed-forward Neural Networks) proposed in 2006 by Professor Guang-Bin Huang of Nanyang Technological University, Singapore. While keeping the network simple in structure and fast to train, ELM solves the network weights with the Moore-Penrose generalized inverse and obtains a small weight norm, thereby avoiding the problems of gradient-descent learning methods such as local minima, excessive iteration counts, and the need to tune performance indices and learning rates, and it achieves good network generalization. ELM can be used to reflect the nonlinear relationship between distribution network load patterns and the optimal distribution network structure, and has been applied in many fields.
Research has confirmed that, for a finite set of N distinct samples, an SLFN with at most N hidden-layer nodes and a nonlinear continuous excitation function can approximate these N samples without error.
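This statement can be illustrated numerically. The minimal sketch below, which is only an illustration under the assumption of a sigmoid excitation function and NumPy as the numerical library (neither is prescribed by the patent), fits L = N hidden nodes to N random samples and shows that the residual is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
X = rng.uniform(-1, 1, size=(N, 3))          # N distinct samples with 3 inputs
T = rng.uniform(-1, 1, size=(N, 1))          # arbitrary targets

W = rng.uniform(-1, 1, size=(N, 3))          # L = N random hidden-node input weights
b = rng.uniform(-1, 1, size=N)               # random hidden-node biases
H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))     # N x N hidden-layer output matrix
beta = np.linalg.pinv(H) @ T                 # output weights via the generalized inverse

print(np.max(np.abs(H @ beta - T)))          # close to 0: the SLFN reproduces all N samples
```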
The content of the invention
The technical problem to be solved by the invention is to provide a method, based on the extreme learning machine, for changing the network topology and selecting shunt compensation devices. The ELM reflects the nonlinear mapping between the input and output variables and provides generalization ability, so as to establish the correspondence between changing load levels and small hydropower generation on the one hand, and the network topology and shunt compensation scheme that satisfy the voltage requirements on the other.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
A method for changing power grid network topology and selecting shunt compensation devices, characterized by comprising the following steps:
S1, divide each element of the input vector into levels.
The monthly load consumption and the monthly small hydropower generation of the distribution network are divided into levels according to the percentage of the peak value that the load consumption or the small hydropower generation accounts for (the number of levels p can be set as needed; in the embodiment it is 7). If there are m small hydropower stations and loads in total in the distribution network, the distribution network has p^m operating modes;
A load consumption or small hydropower generation of no more than 40% of the peak value is level 1;
A load consumption or small hydropower generation in (40%, 50%] of the peak value is level 2;
A load consumption or small hydropower generation in (50%, 60%] of the peak value is level 3;
A load consumption or small hydropower generation in (60%, 70%] of the peak value is level 4;
A load consumption or small hydropower generation in (70%, 80%] of the peak value is level 5;
A load consumption or small hydropower generation in (81%, 89%) of the peak value is level 6;
A load consumption or small hydropower generation of at least 90% of the peak value is level 7;
The input vector is the input vector of the extreme learning machine, i.e.:
X = [x_1 x_2 … x_m]^T;
The output vector is the vector of switch states in the distribution network, i.e.:
Y = [y_1 y_2 … y_n]^T;
where m is the total number of loads and small hydropower stations in the network whose voltage is to be controlled, n is the number of switches in the distribution network, and the elements y_1, y_2, …, y_n of Y are represented by binary data: 0 means the switch is open and 1 means the switch is closed (an illustrative sketch of this quantization and encoding follows below).
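The sketch below shows, under stated assumptions, how the level quantization of step S1 and the switch-state encoding could be implemented. The function names and the contiguous decile boundaries (in particular treating level 6 as (80%, 90%]) are illustrative assumptions, not part of the claims.

```python
import numpy as np

# Upper edges of levels 1..6 as fractions of the peak value; anything above the last edge is level 7.
LEVEL_EDGES = [0.40, 0.50, 0.60, 0.70, 0.80, 0.90]

def to_level(value, peak):
    """Map a monthly load consumption or small hydropower output to a level 1..7."""
    ratio = value / peak
    for level, edge in enumerate(LEVEL_EDGES, start=1):
        if ratio <= edge:
            return level
    return 7

def build_input_vector(monthly_values, peaks):
    """Build X = [x_1 ... x_m]^T of levels for the m loads and small hydro stations."""
    return np.array([to_level(v, p) for v, p in zip(monthly_values, peaks)], dtype=float)

def decode_switch_states(y):
    """Interpret an ELM output Y as binary switch states (0 = open, 1 = closed)."""
    return (np.asarray(y) >= 0.5).astype(int)
```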
S2, model and simulate the distribution network according to the network structure parameters, load and small hydropower data, find the network structure and compensation scheme corresponding to each operating mode of the distribution network, and obtain multiple training sets (X, Y) and test sets (X', Y') for the extreme learning machine (this step uses existing technology; the training and test sets are obtained with other simulation software). X and X' represent the load levels and small hydropower generation of the distribution network; Y and Y' are the corresponding switch states;
S3, select the candidate set Ls of hidden-layer node numbers for the ELM and the candidate set γs of structural-risk-minimization regularization constants, and choose the RBF function as the excitation function g(x):
(The mathematical model of the extreme learning machine is f(X_j) = Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i), where g(X_j) = g(w_i·X_j + b_i) is the excitation function; the excitation function may be a Sigmoid function, a Sine function, an RBF function, etc.; here the RBF function is chosen, whose form is)
G(w_i, b_i, x) = g(b_i·||x - w_i||)    (13);
S4, train the ELM and test it with the test set, obtain the optimal L and γ, and obtain the optimal ELM network model;
S5, save the optimal ELM network model; without changing the distribution network, rapidly output the switch combination state that minimizes losses according to the current load pattern.
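Steps S3 to S5 amount to a grid search over the candidate sets Ls and γs: one regularized ELM is trained per (L, γ) pair and scored on the test set. The sketch below illustrates this selection loop under the assumption of an RBF hidden layer and the regularized output-weight solution β = (I/γ + HᵀH)⁻¹HᵀY that appears later as equation (12); all function names and the accuracy-based scoring are illustrative, not the patent's exact procedure.

```python
import numpy as np

def rbf_hidden_output(X, W, b):
    """Hidden-layer output matrix H for RBF nodes: H[j, i] = g(b_i * ||x_j - w_i||), with g(u) = exp(-u^2)."""
    dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)   # N x L distances
    return np.exp(-(b[None, :] * dists) ** 2)

def train_elm(X, Y, L, gamma, rng):
    """Draw random hidden-node parameters and solve the regularized output weights."""
    W = rng.uniform(-1.0, 1.0, size=(L, X.shape[1]))                # input weights w_i in [-1, 1]
    b = rng.uniform(0.1, 1.0, size=L)                               # RBF widths b_i (assumed range)
    H = rbf_hidden_output(X, W, b)
    beta = np.linalg.solve(np.eye(L) / gamma + H.T @ H, H.T @ Y)    # regularized solution, cf. eq. (12)
    return W, b, beta

def select_model(X_tr, Y_tr, X_te, Y_te, Ls, gammas, seed=0):
    """Grid search over candidate hidden-node counts Ls and regularization constants gammas."""
    rng = np.random.default_rng(seed)
    best = None
    for L in Ls:
        for gamma in gammas:
            W, b, beta = train_elm(X_tr, Y_tr, L, gamma, rng)
            pred = (rbf_hidden_output(X_te, W, b) @ beta >= 0.5).astype(int)
            acc = (pred == Y_te).mean()                             # fraction of switch states predicted correctly
            if best is None or acc > best[0]:
                best = (acc, L, gamma, W, b, beta)
    return best
```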
The ELM algorithm is as follows:
Given N learning samples (x_i, y_i), the ELM fits a continuous objective function f(x_i), with x_i = [x_i1, x_i2, …, x_in]^T ∈ R^n, y_i = [y_i1, y_i2, …, y_im]^T ∈ R^m, i = 1, 2, …, N, and the constructed network has L single-hidden-layer nodes with hidden-node excitation function g(x_i);
Then there exist β_i, w_i and b_i such that the SLFN approximates these N samples with zero error; the ELM model is expressed mathematically as:
f(X_j) = Σ_{i=1}^{L} β_i·g(X_j) = Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i) = t_j    (1);
Applied to two-class classification, the ELM model is:
f(X_j) = sign(Σ_{i=1}^{L} β_i·g(X_j)) = sign(Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i)) = t_j    (2);
where j = 1, 2, …, N; the network input weight vector w_i = [w_i1, w_i2, …, w_in]^T is the connection weight between the input nodes and the i-th hidden-layer node; b_i is the bias of the i-th hidden-layer node; w_i·x_j is the inner product of w_i and x_j; the hidden-node parameters w_i and b_i are generated randomly in [-1, 1]; the network output weight vector β_i = [β_i1, β_i2, …, β_im]^T is the connection weight between the i-th hidden-layer node and the output nodes; i = 1, 2, …, L, where L is the number of hidden-layer nodes;
The N equations (1) are written in matrix form as:
Hβ = T    (3);
where
H(w_1, …, w_L, b_1, …, b_L, x_1, …, x_N) = [g(w_1·x_1 + b_1) … g(w_L·x_1 + b_L); … ; g(w_1·x_N + b_1) … g(w_L·x_N + b_L)]  (an N×L matrix),
β = [β_1^T; …; β_L^T]  (L×m),   T = [t_1^T; …; t_N^T]  (N×m)    (4);
H is defined as the hidden-layer output matrix of the network. Since L << N, H is a non-square matrix; for any given w_i and b_i, the Moore-Penrose generalized inverse theorem gives the unique generalized inverse H^+, and β is then:
β = H^+·T    (5);
By the linear least-squares norm and equation (4), the matrix H satisfies:
Ĥ = min_H ||H·H^+·T - Y||    (6);
where Y = [y_1, y_2, …, y_N];
β is obtained from the matrix H and equation (5), which determines the ELM network parameters and completes the ELM network shown in Fig. 6;
The ELM network parameters are: the number of hidden-layer nodes L, the excitation function g(x), and arbitrary w_i, b_i; x denotes an arbitrary input;
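Equations (3) to (5) translate directly into a few lines of linear algebra. The following sketch, assuming a sigmoid excitation function and NumPy as the numerical library, builds the hidden-layer output matrix H and solves β = H^+·T with the Moore-Penrose pseudoinverse; it is only an illustration of the basic ELM, not the patented control method.

```python
import numpy as np

def elm_fit(X, T, L, seed=0):
    """Basic ELM: random hidden-node parameters, output weights by pseudoinverse (eq. (5))."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(L, n))     # input weights w_i, drawn in [-1, 1]
    b = rng.uniform(-1.0, 1.0, size=L)          # hidden-node biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))    # hidden-layer output matrix, N x L (eq. (3))
    beta = np.linalg.pinv(H) @ T                # beta = H^+ T (eq. (5))
    return W, b, beta

def elm_forward(X, W, b, beta):
    """Network output f(X) = g(X W^T + b) beta (eq. (1))."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```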
Considering that the empirical risk and the confidence interval should be minimized at the same time, so that the actual risk is minimized, the constrained optimization model is:
min J = (1/2)||β||² + (1/2)γ||ε||²    (7);
s.t.  Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i) - y_j = ε_j,  j = 1, 2, …, N    (8);
where (1/2)||β||² comes from the margin-maximization principle of structural risk minimization, γ is the regularization constant, and the squared error ||ε||² represents the fitting accuracy;
The constrained extremum problem of equations (7) and (8) is converted into a Lagrangian function and solved:
l(β, ε, α) = (1/2)||β||² + (1/2)γ||ε||² - Σ_{j=1}^{N} α_j[β_i·g(w_i·x_j + b_i) - y_j - ε_j]    (9);
i.e., in matrix form:
l(β, ε, α) = (1/2)||β||² + (1/2)γ||ε||² - α(Hβ - Y - ε)    (10);
where α = [α_1, α_2, …, α_N] is the vector of Lagrange multipliers;
Taking the partial derivatives of this function and setting them to 0 gives the conditions for a minimum:
∂l/∂β = β^T - αH = 0
∂l/∂ε = γε^T + α = 0
∂l/∂α = Hβ - Y - ε = 0    (11);
From (11):
α = -γ(Hβ - Y)^T
β = (I/γ + H^T·H)^(-1)·H^T·Y    (12);
where I is the identity matrix.
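As a numerical check, the regularized solution of equation (12) approaches the generalized-inverse solution of equation (5) as γ grows large. The snippet below, using arbitrary random data, only illustrates that relationship; the matrix sizes and tolerance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(50, 10))          # N = 50 samples, L = 10 hidden nodes
Y = rng.normal(size=(50, 3))           # 3 outputs

beta_reg = np.linalg.solve(np.eye(10) / 1e9 + H.T @ H, H.T @ Y)   # eq. (12) with a very large gamma
beta_pinv = np.linalg.pinv(H) @ Y                                  # eq. (5), Moore-Penrose solution
print(np.allclose(beta_reg, beta_pinv, atol=1e-6))                 # True: (12) tends to (5) as gamma grows
```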
When the extreme learning machine is used for voltage control of a distribution network containing small hydropower, the input and output vectors must first be determined. Here the input vector consists of the load consumption and the small hydropower generation, i.e.:
X = [x_1 x_2 … x_m]^T;
The output vector is the vector of switch states in the distribution network, i.e.:
Y = [y_1 y_2 … y_n]^T;
where m is the total number of loads and small hydropower stations in the network whose voltage is to be controlled, n is the number of switches in the distribution network, and the elements y_1, y_2, …, y_n of Y are represented by binary data: 0 means the switch is open and 1 means the switch is closed.
Therefore, any group X corresponds to one operating mode of the distribution network, and each operating mode corresponds to a switch state Y for which the voltage profile of the distribution network is reasonable compared with the voltage profiles under other switch states. The ELM neural network determines the output weights β_i from the training samples, and the number of hidden-layer nodes L, the excitation function g(x) and the input parameters w_i, b_i only need to be set once, without iteration; the ELM network parameters are thereby determined.
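Once the ELM parameters are fixed, applying the model at run time is a single forward pass from the current load and small-hydropower levels to a binary switch vector Y. The sketch below is a self-contained illustration; the dimensions, the RBF form of g, and the randomly drawn "trained" parameters are assumptions for demonstration only.

```python
import numpy as np

def elm_predict(x, W, b, beta):
    """Map one input vector x (load / small-hydro levels) to binary switch states."""
    h = np.exp(-(b * np.linalg.norm(x[None, :] - W, axis=1)) ** 2)  # RBF hidden-node outputs
    return (h @ beta >= 0.5).astype(int)

# Hypothetical trained parameters for a network with m = 4 inputs, L = 20 nodes, n = 6 switches.
rng = np.random.default_rng(0)
W, b, beta = rng.uniform(-1, 1, (20, 4)), rng.uniform(0.1, 1, 20), rng.normal(size=(20, 6))

x_now = np.array([5.0, 6.0, 7.0, 3.0])   # current levels of the loads and small hydro stations
print(elm_predict(x_now, W, b, beta))    # e.g. [1 0 1 1 0 0], the switch pattern Y to apply
```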
Beneficial effects: in places rich in small hydropower, the 10 kV distribution network experiences multi-point voltage limit violations in the wet season, and the violations are generally more severe the farther a point is from the 110 kV substation; the multi-point shunt compensation approach therefore requires excessive investment. Changing the distribution network structure changes the power flow distribution in the network and greatly reduces the degree of voltage violation; adding shunt compensation devices at the points where violations are most severe then effectively controls the voltage at those points and further reduces the voltage at other points, and with suitable compensation points and capacities the distribution network voltage can be kept within a reasonable range. In addition, the extreme learning machine network (ELM) can predict, from real-time and historical load levels and small hydropower generation, the most suitable network structure, compensation points and compensation capacities for the near future, so that the network structure can be changed in advance and voltage violations prevented. Since the ELM can rapidly derive the switch states of the distribution network from its operating conditions, changing the network structure, selecting shunt compensation and controlling the distribution network voltage all become more convenient.
Brief description of the drawings
Fig. 1 is the distribution network structure diagram of the embodiment of the method for changing power grid network topology and selecting shunt compensation devices;
Fig. 1-1 is the component 1 of Fig. 1;
Fig. 1-2 is the component 2 of Fig. 1;
Fig. 1-3 is the component 3 of Fig. 1;
Fig. 1-4 is the component 4 of Fig. 1;
Fig. 1-5 is the component 5 of Fig. 1;
Fig. 1-6 is the component 6 of Fig. 1;
Fig. 1-7 is the component 7 of Fig. 1;
Fig. 1-8 is the component 8 of Fig. 1;
Fig. 1-9 is the component 9 of Fig. 1;
Fig. 2 shows the voltages at the monitoring points when all small hydropower stations are connected to line A in the embodiment of Fig. 1;
Fig. 3 shows the voltages at the monitoring points (case 1) after the extreme learning machine determines the network structure for the first wet-season input in the embodiment of Fig. 1;
Fig. 4 shows the voltages at the monitoring points (case 2) after the extreme learning machine determines the network structure for the second wet-season input in the embodiment of Fig. 1;
Fig. 5 shows the voltages at the monitoring points (case 3) after the extreme learning machine determines the network structure for the third wet-season input in the embodiment of Fig. 1;
Fig. 6 is a schematic diagram of the neural network based on ELM.
Embodiment
Take the Potou substation and the distribution network rich in small hydropower connected to it as an example; the network structure of this distribution network is shown in Fig. 1. In the dry season small hydropower generation is small, and when all small hydropower stations are connected to line A the voltage of every point in the distribution network is within (10 ± 0.5) kV, so only the voltage control problem in the wet season is considered.
The distribution network has a typical tree structure, and the loads and small hydropower stations are all connected to line A. During the wet season small hydropower generation is large, and the voltage along line A rises with the distance from the substation; the voltage profile is shown in Fig. 2.
To control the voltage in the distribution network during the wet season, a line B is added alongside line A. Small hydropower stations can be selectively connected either to line A or to line B, and the point where line B is connected to the distribution network can be the secondary side of the Potou substation, the middle of line A, or the tail of line A. In addition, suitable shunt compensation points and compensation capacities are selected in the distribution network so that the voltage is further brought under reasonable control.
To make the voltage control of the distribution network containing small hydropower forward-looking, the load level of the distribution network in each season can be taken from historical data, the small hydropower generation can be predicted from local rainfall information, and both are used as inputs to the extreme learning machine. For an extreme learning machine network model whose parameters have been determined by training, the switch states in the distribution network can then be obtained rapidly, as shown in Table 1.
This determines the number of small hydropower stations connected to line B, the connection position of line B, and the shunt compensation capacities and positions in the distribution network, so that the distribution network structure can be changed in advance and sudden voltage violations caused by a rise in small hydropower generation can be prevented.
The specific steps are as follows:
S1, divide each element of the input vector into levels.
The monthly load consumption and the monthly small hydropower generation of the distribution network are divided into levels according to the percentage of the peak value that the load consumption or the small hydropower generation accounts for (the number of levels p can be set as needed; in the present embodiment it is 7). If there are m small hydropower stations and loads in total in the distribution network, the distribution network has p^m operating modes;
A load consumption or small hydropower generation of no more than 40% of the peak value is level 1;
A load consumption or small hydropower generation in (40%, 50%] of the peak value is level 2;
A load consumption or small hydropower generation in (50%, 60%] of the peak value is level 3;
A load consumption or small hydropower generation in (60%, 70%] of the peak value is level 4;
A load consumption or small hydropower generation in (70%, 80%] of the peak value is level 5;
A load consumption or small hydropower generation in (81%, 89%) of the peak value is level 6;
A load consumption or small hydropower generation of at least 90% of the peak value is level 7;
The input vector is the input vector of the extreme learning machine, i.e.:
X = [x_1 x_2 … x_m]^T;
The output vector is the vector of switch states in the distribution network, i.e.:
Y = [y_1 y_2 … y_n]^T;
where m is the total number of loads and small hydropower stations in the network whose voltage is to be controlled, n is the number of switches in the distribution network, and the elements y_1, y_2, …, y_n of Y are represented by binary data: 0 means the switch is open and 1 means the switch is closed.
S2, model and simulate the distribution network according to the network structure parameters, load and small hydropower data, find the network structure and compensation scheme corresponding to each operating mode of the distribution network, and obtain multiple training sets (X, Y) and test sets (X', Y') for the extreme learning machine (this step uses existing technology; the training and test sets are obtained with other simulation software). X and X' represent the load levels and small hydropower generation of the distribution network; Y and Y' are the corresponding switch states;
S3, select the candidate set Ls of hidden-layer node numbers for the ELM and the candidate set γs of structural-risk-minimization regularization constants, and choose the RBF function as the excitation function g(x):
The mathematical model of the extreme learning machine is f(X_j) = Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i), where g(X_j) = g(w_i·X_j + b_i) is the excitation function; the excitation function may be a Sigmoid function, a Sine function, an RBF function, etc.; here the RBF function is chosen, whose form is
G(w_i, b_i, x) = g(b_i·||x - w_i||)    (13);
S4, train the ELM and test it with the test set, obtain the optimal L and γ, and obtain the optimal ELM network model;
S5, save the optimal ELM network model; without changing the distribution network, rapidly output the switch combination state that minimizes losses according to the current load pattern.
The ELM algorithm is as follows (Fig. 6 is a schematic diagram of the neural network based on ELM):
Given N learning samples (x_i, y_i), the ELM fits a continuous objective function f(x_i), with x_i = [x_i1, x_i2, …, x_in]^T ∈ R^n, y_i = [y_i1, y_i2, …, y_im]^T ∈ R^m, i = 1, 2, …, N, and the constructed network has L single-hidden-layer nodes with hidden-node excitation function g(x_i);
Then there exist β_i, w_i and b_i such that the SLFN approximates these N samples with zero error; the ELM model is expressed mathematically as:
f(X_j) = Σ_{i=1}^{L} β_i·g(X_j) = Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i) = t_j    (1);
Applied to two-class classification, the ELM model is:
f(X_j) = sign(Σ_{i=1}^{L} β_i·g(X_j)) = sign(Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i)) = t_j    (2);
where j = 1, 2, …, N; the network input weight vector w_i = [w_i1, w_i2, …, w_in]^T is the connection weight between the input nodes and the i-th hidden-layer node; b_i is the bias of the i-th hidden-layer node; w_i·x_j is the inner product of w_i and x_j; the hidden-node parameters w_i and b_i are generated randomly in [-1, 1]; the network output weight vector β_i = [β_i1, β_i2, …, β_im]^T is the connection weight between the i-th hidden-layer node and the output nodes; i = 1, 2, …, L, where L is the number of hidden-layer nodes;
The N equations (1) are written in matrix form as:
Hβ = T    (3);
where
H(w_1, …, w_L, b_1, …, b_L, x_1, …, x_N) = [g(w_1·x_1 + b_1) … g(w_L·x_1 + b_L); … ; g(w_1·x_N + b_1) … g(w_L·x_N + b_L)]  (an N×L matrix),
β = [β_1^T; …; β_L^T]  (L×m),   T = [t_1^T; …; t_N^T]  (N×m)    (4);
H is defined as the hidden-layer output matrix of the network. Since L << N, H is a non-square matrix; for any given w_i and b_i, the Moore-Penrose generalized inverse theorem gives the unique generalized inverse H^+, and β is then:
β = H^+·T    (5);
By the linear least-squares norm and equation (4), the matrix H satisfies:
Ĥ = min_H ||H·H^+·T - Y||    (6);
where Y = [y_1, y_2, …, y_N];
β is obtained from the matrix H and equation (5), which determines the ELM network parameters and completes the ELM network shown in Fig. 6;
The ELM network parameters are: the number of hidden-layer nodes L, the excitation function g(x), and arbitrary w_i, b_i; x denotes an arbitrary input;
Considering that the empirical risk and the confidence interval should be minimized at the same time, so that the actual risk is minimized, the constrained optimization model is:
min J = (1/2)||β||² + (1/2)γ||ε||²    (7);
s.t.  Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i) - y_j = ε_j,  j = 1, 2, …, N    (8);
where (1/2)||β||² comes from the margin-maximization principle of structural risk minimization, γ is the regularization constant, and the squared error ||ε||² represents the fitting accuracy;
The constrained extremum problem of equations (7) and (8) is converted into a Lagrangian function and solved:
l(β, ε, α) = (1/2)||β||² + (1/2)γ||ε||² - Σ_{j=1}^{N} α_j[β_i·g(w_i·x_j + b_i) - y_j - ε_j]    (9);
i.e., in matrix form:
l(β, ε, α) = (1/2)||β||² + (1/2)γ||ε||² - α(Hβ - Y - ε)    (10);
where α = [α_1, α_2, …, α_N] is the vector of Lagrange multipliers;
Taking the partial derivatives of this function and setting them to 0 gives the conditions for a minimum:
∂l/∂β = β^T - αH = 0
∂l/∂ε = γε^T + α = 0
∂l/∂α = Hβ - Y - ε = 0    (11);
From (11):
α = -γ(Hβ - Y)^T
β = (I/γ + H^T·H)^(-1)·H^T·Y    (12);
where I is the identity matrix.
When the extreme learning machine is used for voltage control of a distribution network containing small hydropower, the input and output vectors must first be determined. Here the input vector consists of the load consumption and the small hydropower generation, i.e.:
X = [x_1 x_2 … x_m]^T;
The output vector is the vector of switch states in the distribution network, i.e.:
Y = [y_1 y_2 … y_n]^T;
where m is the total number of loads and small hydropower stations in the network whose voltage is to be controlled, n is the number of switches in the distribution network, and the elements y_1, y_2, …, y_n of Y are represented by binary data: 0 means the switch is open and 1 means the switch is closed.
Therefore, any group X corresponds to one operating mode of the distribution network, and each operating mode corresponds to a switch state Y for which the voltage profile of the distribution network is reasonable compared with the voltage profiles under other switch states. The ELM neural network determines the output weights β_i from the training samples, and the number of hidden-layer nodes L, the excitation function g(x) and the input parameters w_i, b_i only need to be set once, without iteration; the ELM network parameters are thereby determined.
Figs. 3, 4 and 5 show the voltage distribution at the load points of the distribution network when the extreme learning machine determines the network structure for different wet-season inputs.
Table 1: switch states and compensation capacities corresponding to the situations of Fig. 2 to Fig. 5
It can be seen from Fig. 3 to Fig. 5 that, after the extreme learning machine is trained, a reasonable network structure and shunt reactive compensation can be given for a given load and small hydropower generation, so that the voltage is controlled within a reasonable range.

Claims (2)

  1. A method for changing power grid network topology and selecting shunt compensation devices, characterized by comprising the following steps:
    S1, divide each element of the input vector into levels;
    The monthly load consumption and the monthly small hydropower generation of the distribution network are divided into p = 7 levels according to the percentage of the peak value that the load consumption or the small hydropower generation accounts for; if there are n small hydropower stations and loads in total in the distribution network, the distribution network has p^n operating modes;
    A load consumption or small hydropower generation of no more than 40% of the peak value is level 1;
    A load consumption or small hydropower generation in (40%, 50%] of the peak value is level 2;
    A load consumption or small hydropower generation in (50%, 60%] of the peak value is level 3;
    A load consumption or small hydropower generation in (60%, 70%] of the peak value is level 4;
    A load consumption or small hydropower generation in (70%, 80%] of the peak value is level 5;
    A load consumption or small hydropower generation in (81%, 89%) of the peak value is level 6;
    A load consumption or small hydropower generation of at least 90% of the peak value is level 7;
    The input vector is the input vector of the extreme learning machine, i.e.:
    X = [x_1 x_2 … x_n]^T;
    The output vector is the vector of switch states in the distribution network, i.e.:
    Y = [y_1 y_2 … y_m]^T;
    where n is the total number of loads and small hydropower stations in the network whose voltage is to be controlled, m is the number of switches in the distribution network, and the elements y_1, y_2, …, y_m of Y are represented by binary data: 0 means the switch is open and 1 means the switch is closed;
    S2, model and simulate the distribution network according to the network structure parameters, load and small hydropower data, find the network structure and compensation scheme corresponding to each operating mode of the distribution network, and obtain multiple training sets (X, Y) and test sets (X', Y') for the extreme learning machine;
    where X and X' represent the load levels and small hydropower generation of the distribution network, and Y and Y' are the corresponding switch states;
    S3, select the candidate set Ls of hidden-layer node numbers for the ELM and the candidate set γs of structural-risk-minimization regularization constants;
    The mathematical model of the extreme learning machine is:
    f(X_j) = Σ_{i=1}^{L} β_i·g(X_j) = Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i);
    where i and j are respectively the index of the hidden-layer node and of the sample; L is the number of hidden-layer nodes, β_i is the output weight, g(X_j) = g(w_i·X_j + b_i) is the excitation function, w_i is the input weight, and b_i is the bias of the i-th hidden unit;
    γ is the regularization constant; the excitation function is chosen as the RBF function, whose form is:
    G(w_i, b_i, x) = g(b_i·||x - w_i||)    (13);
    S4, train the ELM and test it with the test set, obtain the number of hidden-layer nodes L and the regularization constant γ, and obtain the optimal ELM network model;
    S5, save the optimal ELM network model; without changing the distribution network, rapidly output the switch combination state that minimizes losses according to the current load pattern.
  2. The method for changing power grid network topology and selecting shunt compensation devices according to claim 1, characterized in that the ELM algorithm is as follows:
    Given N learning samples (X_j, Y_j), the ELM fits a continuous objective function f(X_j), with X_j = [x_j1, x_j2, …, x_jn]^T ∈ R^n, Y_j = [y_j1, y_j2, …, y_jm]^T ∈ R^m, j = 1, 2, …, N, and the constructed network has L single-hidden-layer nodes with hidden-node excitation function g(X_j);
    Then there exist β_i, w_i and b_i such that the SLFN approximates these N samples with zero error; the ELM model is expressed mathematically as:
    f(X_j) = Σ_{i=1}^{L} β_i·g(X_j) = Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i) = t_j    (1);
    Applied to two-class classification, the ELM model is:
    f(X_j) = sign(Σ_{i=1}^{L} β_i·g(X_j)) = sign(Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i)) = t_j    (2);
    where j = 1, 2, …, N; the network input weight vector w_i = [w_i1, w_i2, …, w_in]^T is the connection weight between the input nodes and the i-th hidden-layer node; b_i is the bias of the i-th hidden-layer node; w_i·x_j is the inner product of w_i and x_j; the hidden-node parameters w_i and b_i are generated randomly in [-1, 1]; the network output weight vector β_i = [β_i1, β_i2, …, β_im]^T is the connection weight between the i-th hidden-layer node and the output nodes; i = 1, 2, …, L, where L is the number of hidden-layer nodes;
    The N equations (1) are written in matrix form as:
    Hβ = T    (3);
    where
    H(w_1, …, w_L, b_1, …, b_L, x_1, …, x_N) = [g(w_1·x_1 + b_1) … g(w_L·x_1 + b_L); … ; g(w_1·x_N + b_1) … g(w_L·x_N + b_L)]  (an N×L matrix),
    β = [β_1^T; …; β_L^T]  (L×m),   T = [t_1^T; …; t_N^T]  (N×m)    (4);
    H is defined as the hidden-layer output matrix of the network; since L << N, H is a non-square matrix; for any given w_i and b_i, the Moore-Penrose generalized inverse theorem gives the unique generalized inverse H^+, and β is then:
    β = H^+·T    (5);
    By the linear least-squares norm and equation (4), the matrix H satisfies:
    Ĥ = min_H ||H·H^+·T - Y||    (6);
    where Y = [y_1, y_2, …, y_N];
    β is obtained from the matrix H and equation (5), which determines the ELM network parameters and completes the ELM network;
    The ELM network parameters are: the number of hidden-layer nodes L, the excitation function g(x), and arbitrary w_i, b_i; x denotes an arbitrary input;
    Considering that the empirical risk and the confidence interval should be minimized at the same time, so that the actual risk is minimized, the constrained optimization model is:
    min J = (1/2)||β||² + (1/2)γ||ε||²    (7);
    s.t.  Σ_{i=1}^{L} β_i·g(w_i·X_j + b_i) - y_j = ε_j,  j = 1, 2, …, N    (8);
    where (1/2)||β||² comes from the margin-maximization principle of structural risk minimization, γ is the regularization constant, and the squared error ||ε||² represents the fitting accuracy;
    The constrained extremum problem of equations (7) and (8) is converted into a Lagrangian function and solved:
    l(β, ε, α) = (1/2)||β||² + (1/2)γ||ε||² - Σ_{j=1}^{N} α_j[β_i·g(w_i·x_j + b_i) - y_j - ε_j]    (9);
    i.e., in matrix form:
    l(β, ε, α) = (1/2)||β||² + (1/2)γ||ε||² - α(Hβ - Y - ε)    (10);
    where α = [α_1, α_2, …, α_N] is the vector of Lagrange multipliers;
    Taking the partial derivatives of this function and setting them to 0 gives the conditions for a minimum:
    ∂l/∂β = β^T - αH = 0
    ∂l/∂ε = γε^T + α = 0
    ∂l/∂α = Hβ - Y - ε = 0    (11);
    From (11):
    α = -γ(Hβ - Y)^T
    β = (I/γ + H^T·H)^(-1)·H^T·Y    (12);
    where I is the identity matrix;
    When the extreme learning machine is used for voltage control of a distribution network containing small hydropower, the input and output vectors must first be determined; here the input vector consists of the load consumption and the small hydropower generation, i.e.:
    X = [x_1 x_2 … x_n]^T;
    The output vector is the vector of switch states in the distribution network, i.e.:
    Y = [y_1 y_2 … y_m]^T;
    where n is the total number of loads and small hydropower stations in the network whose voltage is to be controlled, m is the number of switches in the distribution network, and the elements y_1, y_2, …, y_m of Y are represented by binary data: 0 means the switch is open and 1 means the switch is closed.
CN201510072840.3A 2015-02-10 2015-02-10 Method for changing power grid network topology and selecting shunt compensation devices Active CN104700205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510072840.3A CN104700205B (en) 2015-02-10 2015-02-10 Method for changing power grid network topology and selecting shunt compensation devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510072840.3A CN104700205B (en) 2015-02-10 2015-02-10 Method for changing power grid network topology and selecting shunt compensation devices

Publications (2)

Publication Number Publication Date
CN104700205A CN104700205A (en) 2015-06-10
CN104700205B true CN104700205B (en) 2018-05-04

Family

ID=53347298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510072840.3A Active CN104700205B (en) Method for changing power grid network topology and selecting shunt compensation devices

Country Status (1)

Country Link
CN (1) CN104700205B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160437A (en) * 2015-09-25 2015-12-16 国网浙江省电力公司 Load model prediction method based on extreme learning machine
CN109389253B (en) * 2018-11-09 2022-04-15 国网四川省电力公司电力科学研究院 Power system frequency prediction method after disturbance based on credibility ensemble learning
CN109540522B (en) * 2018-11-16 2020-02-14 北京航空航天大学 Bearing health quantitative modeling method and device and server
CN109951336B (en) * 2019-03-24 2021-05-18 西安电子科技大学 Electric power transportation network optimization method based on gradient descent algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103698699A (en) * 2013-12-06 2014-04-02 西安交通大学 Asynchronous motor fault monitoring and diagnosing method based on model
CN104299043A (en) * 2014-06-13 2015-01-21 国家电网公司 Ultra-short-term load prediction method of extreme learning machine

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103698699A (en) * 2013-12-06 2014-04-02 西安交通大学 Asynchronous motor fault monitoring and diagnosing method based on model
CN104299043A (en) * 2014-06-13 2015-01-21 国家电网公司 Ultra-short-term load prediction method of extreme learning machine

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A novel visual modeling system for time series forecast: application to the domain of hydrology; Mutao Huang, Yong Tian; Journal of Hydroinformatics; 2013-12-31; Vol. 15, No. 1; pp. 21-37 *
Application of the ELM algorithm to ultra-short-term load forecasting in microgrids; Xu Sheng, Jiang Tiezheng, Xiang Lei; Electric Switchgear (电器开关); 2013-12-31 (No. 3); pp. 70-73, 76 *
Research and development of an integrated automation system for small hydropower; Ye Qingwei; Wanfang Academic Journal Database (万方学术期刊数据库); 2007-09-10; full text *

Also Published As

Publication number Publication date
CN104700205A (en) 2015-06-10

Similar Documents

Publication Publication Date Title
CN104037776B (en) The electric network reactive-load capacity collocation method of random inertial factor particle swarm optimization algorithm
CN108599154A (en) A kind of three-phase imbalance power distribution network robust dynamic reconfiguration method considering uncertain budget
CN104700205B (en) Method for changing power grid network topology and selecting shunt compensation devices
CN105449675A (en) Power network reconfiguration method for optimizing distributed energy access point and access proportion
Tang et al. Study on day-ahead optimal economic operation of active distribution networks based on Kriging model assisted particle swarm optimization with constraint handling techniques
CN103593711B (en) A kind of distributed power source Optimal Configuration Method
Huang et al. Hybrid optimisation method for optimal power flow using flexible AC transmission system devices
CN104377826A (en) Active power distribution network control strategy and method
Yu et al. Distributed multi-step Q (λ) learning for optimal power flow of large-scale power grids
CN103490428B (en) Method and system for allocation of reactive compensation capacity of microgrid
CN104993525B (en) A kind of active distribution network coordinating and optimizing control method of meter and ZIP loads
CN103455948B (en) A kind of distribution system multi-dimensional multi-resolution Modeling and the method for analysis
CN108717608A (en) Million kilowatt beach photovoltaic plant accesses electric network synthetic decision-making technique and system
CN102163845B (en) Optimal configuration method of distributed generations (DG) based on power moment algorithm
CN105870968A (en) Three-phase imbalance reactive voltage control method metering system negative sequence voltage
CN104578091B (en) The no-delay OPTIMAL REACTIVE POWER coordinated control system and method for a kind of power network containing multi-source
CN104201671A (en) Static voltage stability assessment method of three-phase unbalanced power distribution network including wind power
CN105529703B (en) A kind of urban network reconstruction planing method based on power supply capacity bottleneck analysis
CN106099939A (en) A kind of transformer station reactive apparatus affects the computational methods of sensitivity to busbar voltage
CN103346573B (en) Planing method that wind power system based on golden section cloud particle swarm optimization algorithm is idle
CN100483888C (en) Economic adjusting and control method for top layer of the static mixed automatic voltage control
CN105071397A (en) Coordinated reactive voltage control method of different reactive compensation devices of wind power delivery
CN105896613B (en) A kind of micro-capacitance sensor distribution finite-time control method for considering communication time lag
CN107465195B (en) Optimal power flow double-layer iteration method based on micro-grid combined power flow calculation
Cao et al. Opposition-based improved pso for optimal reactive power dispatch and voltage control

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant