CN107169612A - Neural-network-based wind turbine active power prediction and error correction method - Google Patents
Neural-network-based wind turbine active power prediction and error correction method
- Publication number
- CN107169612A CN107169612A CN201710473103.3A CN201710473103A CN107169612A CN 107169612 A CN107169612 A CN 107169612A CN 201710473103 A CN201710473103 A CN 201710473103A CN 107169612 A CN107169612 A CN 107169612A
- Authority
- CN
- China
- Prior art keywords
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06Q50/06—Energy or water supply
Abstract
The invention discloses a neural-network-based method for predicting wind turbine active power and correcting the prediction error. Wavelet power spectrum analysis is first used to extract the harmonic component sequences implied in the wind turbine active power time series and to separate out the residual sequence. The harmonic component sequences and the residual sequence are then each predicted with a neural network model: each harmonic component sequence is predicted with a BP neural network optimized by the particle swarm algorithm and corrected by overlay error correction, while the residual sequence is predicted with an RBF neural network likewise optimized by the particle swarm algorithm and corrected by overlay error correction. Summing the prediction results of the harmonic component sequences and the residual sequence yields the final wind turbine active power prediction. The invention achieves refined forecasting of the active power of every wind turbine in a wind farm, thereby effectively improving the short-term output forecasting of the whole wind farm.
Description
Technical field
The invention belongs to the field of wind energy forecasting, and in particular relates to a neural-network-based wind turbine active power prediction and error correction method.
Background technology
Accurate forecasting of wind farm output is essential for effectively integrating wind energy into the grid. In particular, the 0-to-6-hour short-term forecast is significant for real-time grid dispatch and for technical parameters bearing on grid security such as mains frequency, power balance and voltage balance.
Wind energy is a renewable, clean energy source with the advantages of scalable installation, high wind turbine generator reliability, low cost, and simple operation and maintenance. According to the "2014 Wind Power Industry Monitoring Situation" released by the National Energy Administration in February 2015, by the end of 2014 China's cumulative installed wind power capacity had reached 96.37 million kilowatts, accounting for 7% of the country's total installed generating capacity and 27% of global wind power installation. Wind power generation in 2014 was 153.4 billion kilowatt-hours, 2.78% of total generation. The "Energy Development Strategic Action Plan (2014-2020)" issued by the National Energy Administration in December 2014 anticipates that installed wind power capacity will reach 200 million kilowatts by 2020. Wind power has thus become China's third-largest power source after thermal power and hydropower. As installed capacity keeps increasing, however, the problem of wind power curtailment has become ever more prominent. According to National Energy Administration statistics, about 20 billion kilowatt-hours of wind power were curtailed nationwide in 2012, an average curtailment rate of 17%; about 15 billion kilowatt-hours were curtailed in 2013, an average rate of 10%; and the latest statistics show that by the end of September 2014, 8.6 billion kilowatt-hours of wind power had been curtailed, an average rate of 7.5%. A major cause of curtailment is that the intermittency of wind makes wind power output fluctuating and unstable, degrading its quality, so that it is curtailed to ensure grid security. For this reason, the National Energy Administration issued the "Interim Measures for the Administration of Wind Power Forecasting" in 2011, requiring all grid-connected wind farms in China to establish wind power forecasting systems and generation-plan declaration mechanisms before January 1, 2012, begin trial operation, and report wind power forecast results as required.
Common methods for forecasting wind farm active power include physical methods and statistical methods. Physical methods obtain timed, fixed-point, quantitative wind power prediction output from a refined numerical weather prediction model of high spatio-temporal resolution, and, taking into account the actual operating conditions of the wind farm's turbines and the various factors influencing turbine power generation, establish a physical output-prediction model to forecast the wind farm's output. Physical methods do not require large amounts of measured data, but they do require an accurate mathematical description of the physical characteristics of the atmosphere and of the wind farm's turbines; the resulting equations are difficult to solve, the required data are massive, the computation is heavy and slow, and obtaining data from meteorological authorities is difficult and costly. In short-term wind power active power forecasting, statistical methods are therefore still the common choice. At present, most statistical methods use the historical data of the wind farm's anemometer tower together with techniques such as the persistence method, stochastic time series methods, Kalman filtering, neural networks, and support vector machines to predict the wind farm as a whole. The drawback of this approach is that, because the wind farm is affected by terrain, turbulence, turbine operating conditions and so on, it inevitably incurs a large prediction error, regardless of the specific forecasting method. With improvements in measurement technology and computing power, refined forecasting of the active power of each individual turbine nacelle has become feasible.
Content of the invention
In view of the above problems, the present invention proposes a neural-network-based wind turbine active power prediction and error correction method that performs refined forecasting of the active power of every wind turbine in a wind farm, thereby effectively improving the short-term output forecasting of the whole wind farm.
To realize the above technical purpose and achieve the above technical effect, the present invention is implemented through the following technical solution:
A neural-network-based wind turbine active power prediction and error correction method, characterized by comprising the following steps:
(1) Read in the raw sampled active power time series of the wind turbine, p = {p(i), i = 1, 2, ..., N}, where N is the number of raw wind turbine active power sampling points. Adjust p to the average active power time series required by the forecast interval, p' = {p'(j), j = 1, 2, ..., M}, where M is the number of sampling points of the adjusted wind turbine average active power series. Denote the mean of p' by p̄', and let P = p' − p̄'.
(2) Using the multi-scale wavelet power spectrum analysis method, extract the harmonic component sequences {P1, P2, ..., Pk, ..., PK} in P, where K is the number of harmonic component sequences in P and Pk = {Pk(1), Pk(2), ..., Pk(M)}; thus P = P1 + P2 + ... + PK + R, where R = P − P1 − P2 − ... − PK is the residual sequence left after removing the harmonic component sequences from P.
(3) Predict each harmonic component sequence in {P1, P2, ..., Pk, ..., PK} with a BP neural network optimized by particle swarm optimization and corrected by overlay error correction. With prediction step length l, the prediction result of each harmonic component sequence is Yk = {Yk(1), Yk(2), ..., Yk(l)}.
(4) Predict the first-order difference sequence D of the residual sequence R with an RBF neural network optimized by particle swarm optimization and corrected by overlay error correction, and recover the residual prediction by the inverse difference operation. With prediction step length l, the prediction result of the residual sequence R is YR = {YR(1), YR(2), ..., YR(l)}.
(5) Add p̄', the prediction results of the harmonic component sequences, and the prediction result of the residual sequence to obtain the final prediction Y.
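The five steps above can be sketched end to end as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: `decompose`, `predict_component` and `predict_residual` are hypothetical stand-ins for the wavelet spectrum analysis and the PSO-optimized BP/RBF networks detailed later.

```python
import numpy as np

def predict_active_power(p_prime, decompose, predict_component, predict_residual, l):
    """Sketch of the overall scheme: mean removal, decomposition into
    harmonic components plus residual, per-sequence prediction, and
    recombination. All callables are stand-ins for the patent's methods."""
    p_bar = p_prime.mean()                  # mean p-bar' of the adjusted series p'
    P = p_prime - p_bar                     # zero-mean series P
    components = decompose(P)               # K harmonic component sequences
    R = P - np.sum(components, axis=0)      # residual after removing harmonics
    Y = sum(predict_component(Pk, l) for Pk in components)  # component forecasts
    Y = Y + predict_residual(R, l)          # forecast of the residual sequence
    return Y + p_bar                        # add the mean back: final prediction
```

With trivial persistence predictors and a single "component", the function simply carries the last value forward, which makes the recombination logic easy to verify.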
Further, the detailed process of the particle-swarm-optimized BP neural network employed in step (3) is:
(1) According to the Kolmogorov theorem, establish a 3-layer BP neural network model: if the number of input layer neurons is I, the number of hidden layer neurons is H and the number of output layer neurons is O, where H = 2*I + 1 and O = 1.
(2) Determine the parameters to optimize, namely: the number of input layer neurons I of the BP neural network and the length of the training set L, together with one group of objects W = (w(1), w(2), ..., w(q)), q = I*H + H*O + H + O, where w(1)~w(I*H) are the link weights from the input layer of the BP neural network to the hidden layer neurons, w(I*H+1)~w(I*H+H*O) are the link weights from the hidden layer to the output layer neurons, w(I*H+H*O+1)~w(I*H+H*O+H) are the thresholds of the hidden layer neurons, and w(I*H+H*O+H+1)~w(I*H+H*O+H+O) are the thresholds of the output layer neurons.
(3) Initialize the particle swarm {Xi, i = 1, 2, ..., Q1}, where Q1 is the total number of particles; the i-th particle is Xi = (Ii, Wi, Li) with velocity Vi = (v_Ii, v_Wi, v_Li), where Ii, Wi, Li are one group of candidate solutions for the parameters I, W, L.
(4) For each particle Xi = (Ii, Wi, Li) in the swarm, determine the parameters and construct the input and output matrices of the BP neural network training set: for the harmonic component sequence Pk and input layer neuron number Ii, first build the matrices Z1 and Z2. For the training set length L to be optimized, the last Li columns of Z1 serve as the training set input matrix Itrain and the last Li columns of Z2 as the training set output matrix Otrain; taking the forecast step length l as the test step length, the last l columns of Z1 serve as the test set input matrix Itest and the last l columns of Z2 as the test set output matrix Otest. The sum of squared errors of the BP neural network constructed from the training set on the test set is taken as the particle's fitness value, with minimum fitness as the optimization direction and evaluation criterion for judging the quality of each particle. Record the current individual extreme value of particle Xi as Pbest(i), and take the best individual among the Pbest(i) in the swarm as the global extreme value Gbest.
(5) Update the position and velocity of each particle Xi in the swarm:
Vi(g+1) = ω·Vi(g) + c1·r1·(Pbest(i) − Xi(g)) + c2·r2·(Gbest − Xi(g))
Xi(g+1) = Xi(g) + Vi(g+1)
where ω is the inertia weight, c1 and c2 are acceleration factors, g is the current iteration number, and r1, r2 are random numbers distributed in [0, 1].
(6) Recalculate the objective function value of each particle and update Pbest(i) and Gbest.
(7) Judge whether the maximum number of iterations has been reached; if so, terminate the optimization process and obtain the optimal BP neural network parameters Ibest, Wbest = (wbest(1), wbest(2), ..., wbest(q)), Lbest; otherwise return to step (4).
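The particle update rule in step (5) can be sketched as a generic minimizer. This is a minimal sketch, not the patent's full scheme: the fitness function here stands in for the test-set squared error of a network built from each particle's parameters, and the particle is an unconstrained real vector rather than the mixed (Ii, Wi, Li) encoding.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=20, iters=50,
                 w=0.5, c1=1.49445, c2=1.49445, bounds=(-1.0, 1.0), seed=0):
    """Minimal particle swarm optimizer following the update rule above:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # initial positions
    v = np.zeros_like(x)                              # initial velocities
    pbest = x.copy()                                  # individual extreme values
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global extreme value
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f                        # update Pbest(i) ...
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()            # ... and Gbest
    return g
```

The defaults w = 0.5 and c1 = c2 = 1.49445 are the values the patent fixes later for the inertia weight and acceleration factors.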
Further, the detailed process of the overlay error correction method employed in step (3) is:
(1) First, with the optimized BP neural network parameters Ibest, Wbest = (wbest(1), wbest(2), ..., wbest(q)), Lbest, construct the neural network training set Z3 and test set Z4 and initialize the link weights and thresholds of the BP neural network, where wbest(1)~wbest(I*H) are the initial values of the link weights from the input layer to the hidden layer neurons, wbest(I*H+1)~wbest(I*H+H*O) are the initial values of the link weights from the hidden layer to the output layer neurons, wbest(I*H+H*O+1)~wbest(I*H+H*O+H) are the initial values of the thresholds of the hidden layer neurons, and wbest(I*H+H*O+H+1)~wbest(I*H+H*O+H+O) are the initial values of the thresholds of the output layer neurons.
(2) Define the rate of change of the sequence Pk at sampling point i as c = |Pk(i) − Pk(i−1)|. For prediction step length l, calculate the maximum rate of change cmax over the last l step lengths of Z4; the iterated l-step prediction can then proceed. During prediction, cmax serves as the maximum allowed change between the network output Yk(j) at the j-th prediction step and the previous predicted or actual value Yk(j−1): if |Yk(j) − Yk(j−1)| < cmax, then Yk(j) is taken as the predicted value of the j-th step; otherwise, if Yk(j) > Yk(j−1), then Yk(j) = Yk(j−1) + cmax, and if Yk(j) < Yk(j−1), then Yk(j) = Yk(j−1) − cmax.
Further, the detailed process of the particle-swarm-optimized RBF neural network employed in step (4) is:
(1) Determine the parameters to optimize, namely: the number of input layer neurons I of the RBF neural network and the training set length L.
(2) Initialize the particle swarm {Xi, i = 1, 2, ..., Q2}, where Q2 is the total number of particles; the i-th particle is Xi = (Ii, Li) with velocity Vi = (v_Ii, v_Li), where Ii, Li are one group of candidate solutions for the parameters I, L.
(3) For each particle Xi = (Ii, Li) in the swarm, determine the parameters and construct the input and output matrices of the RBF neural network training set: for the residual sequence R and input layer neuron number Ii, first build the matrices Z5 and Z6. For the training set length L to be optimized, the last Li columns of Z5 serve as the training set input matrix Itrain and the last Li columns of Z6 as the training set output matrix Otrain; taking the forecast step length l as the test step length, the last l columns of Z5 serve as the test set input matrix Itest and the last l columns of Z6 as the test set output matrix Otest. The sum of squared errors of the RBF neural network constructed from the training set on the test set is taken as the particle's fitness value, with minimum fitness as the optimization direction and evaluation criterion for judging the quality of each particle. Record the current individual extreme value of particle Xi as Pbest(i), and take the best individual among the Pbest(i) in the swarm as the global extreme value Gbest.
(4) Update the position and velocity of each particle Xi in the swarm:
Vi(g+1) = ω·Vi(g) + c1·r1·(Pbest(i) − Xi(g)) + c2·r2·(Gbest − Xi(g))
Xi(g+1) = Xi(g) + Vi(g+1)
where ω is the inertia weight, c1 and c2 are acceleration factors, g is the current iteration number, and r1, r2 are random numbers distributed in [0, 1].
(5) Recalculate the objective function value of each particle and update Pbest(i) and Gbest.
(6) Judge whether the maximum number of iterations has been reached; if so, terminate the optimization process and obtain the optimal RBF neural network parameters Ibest and Lbest; otherwise return to step (3).
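The RBF network in step (4) is applied not to the residual R itself but to its first-order difference sequence D, with the residual forecast recovered by the inverse difference. That wrapper can be sketched as follows; `predict_diff` is a hypothetical stand-in for the PSO-optimized RBF network.

```python
import numpy as np

def predict_residual_via_difference(R, predict_diff, l):
    """Step (4) sketch: forecast the first-difference sequence D of the
    residual R, then recover the residual forecasts by the inverse
    difference operation (cumulative sum anchored at the last observed
    residual value)."""
    D = np.diff(R)                    # first-order difference sequence D
    dY = predict_diff(D, l)           # l-step forecast of the differences
    return R[-1] + np.cumsum(dY)      # inverse difference: YR(1..l)
```

With a persistence forecaster on D, a linearly trending residual is simply extrapolated, which makes the inverse operation easy to check.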
Further, the detailed process of the overlay error correction method employed in step (4) is:
(1) First, with the optimized RBF neural network parameters Ibest and Lbest, construct the neural network training set Z7 and test set Z8 and initialize the network's link weights and thresholds.
(2) Define the rate of change of the sequence R at sampling point i as c = |R(i) − R(i−1)|. For prediction step length l, calculate the maximum rate of change cmax over the last l step lengths of Z8; the iterated l-step prediction can then proceed. During prediction, cmax serves as the maximum allowed change between the network output YR(j) at the j-th prediction step and the previous predicted or actual value YR(j−1): if |YR(j) − YR(j−1)| < cmax, then YR(j) is taken as the predicted value of the j-th step; otherwise, if YR(j) > YR(j−1), then YR(j) = YR(j−1) + cmax, and if YR(j) < YR(j−1), then YR(j) = YR(j−1) − cmax.
Further, the inertia weight is ω = 0.5 and the acceleration factors are c1 = c2 = 1.49445.
Beneficial effects of the present invention:
(1) The harmonic component sequences extracted from the wind turbine active power by multi-scale analysis are strongly regular and account for a large proportion of the original active power sequence, laying the foundation for higher-precision prediction; the residual sequence left after removing the harmonic component sequences accounts for only a small proportion of the whole, so its prediction error is relatively limited. The approach proposed by the invention, namely decomposing the original active power sequence through multi-scale analysis into harmonic component sequences and a residual sequence and then predicting each sequence separately, can therefore greatly improve the overall prediction effect.
(2) Weather processes change continuously rather than abruptly, a property that is especially prominent in the harmonic component sequences. Although the harmonic component sequences are smooth, their amplitude and phase change noticeably over time; this affects the learning and generalization performance of the neural network, causing abrupt jumps in the predicted values and a decline in prediction accuracy. Exploiting the continuity of weather change, the application constrains abrupt jumps in the predicted values through a maximum rate-of-change index. Experiments show that this technique effectively suppresses the abrupt behavior of neural network predictions and improves prediction accuracy; the same method is also applied in the error correction of the residual sequence.
(3) Since the choice of neural network structure affects prediction performance, the invention adopts, according to the characteristics of the harmonic component sequences extracted from the wind turbine active power sequence and of the separated residual sequence, the BP neural network and the RBF neural network respectively, and optimizes the structural parameters and training set scale of the networks with the particle swarm algorithm, significantly improving the generalization performance of the networks and ultimately the prediction accuracy.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the minute-level wind turbine active power sequence;
Fig. 3 is the wavelet power spectrum analysis result of sequence P;
Fig. 4 shows the harmonic component sequences extracted from sequence P and the separated residual sequence;
Fig. 5(a) is the one-step prediction result of the method of the present invention;
Fig. 5(b) is the two-step prediction result of the method of the present invention;
Fig. 5(c) is the three-step prediction result of the method of the present invention;
Fig. 6(a) is the prediction comparison for the maximum rate-of-change index on harmonic component sequence P5;
Fig. 6(b) is the prediction comparison for the maximum rate-of-change index on residual R;
Fig. 7(a) is the one-step prediction result of the particle-swarm-optimized RBF neural network established for the minute-level wind turbine active power sequence;
Fig. 7(b) is the two-step prediction result of the particle-swarm-optimized RBF neural network established for the minute-level wind turbine active power sequence;
Fig. 7(c) is the three-step prediction result of the particle-swarm-optimized RBF neural network established for the minute-level wind turbine active power sequence;
Fig. 8 is the db3 wavelet decomposition result of the minute-level wind turbine active power sequence in the contrast experiment;
Fig. 9(a) is the one-step prediction result of wavelet decomposition with an RBF neural network of fixed parameters in the contrast experiment;
Fig. 9(b) is the two-step prediction result of wavelet decomposition with an RBF neural network of fixed parameters in the contrast experiment;
Fig. 9(c) is the three-step prediction result of wavelet decomposition with an RBF neural network of fixed parameters in the contrast experiment;
Fig. 10(a) is the one-step prediction result of wavelet decomposition with an RBF neural network whose parameters are optimized by the particle swarm algorithm in the contrast experiment;
Fig. 10(b) is the two-step prediction result of wavelet decomposition with an RBF neural network whose parameters are optimized by the particle swarm algorithm in the contrast experiment;
Fig. 10(c) is the three-step prediction result of wavelet decomposition with an RBF neural network whose parameters are optimized by the particle swarm algorithm in the contrast experiment.
Embodiments
To make the purpose, technical solution and advantages of the present invention clearer, the present invention is further elaborated below with reference to the embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The application principle of the present invention is explained in detail below in conjunction with the accompanying drawings.
As shown in Fig. 1, a neural-network-based wind turbine active power prediction and error correction method comprises the following steps:
S1: Read in the raw sampled active power time series of the wind turbine, p = {p(i), i = 1, 2, ..., N}, where N is the number of raw wind turbine active power sampling points. Adjust p to the average active power time series required by the forecast interval, p' = {p'(j), j = 1, 2, ..., M}, where M is the number of sampling points of the adjusted wind turbine average active power series. Denote the mean of p' by p̄', and let P = p' − p̄'.
S2: Using the multi-scale wavelet power spectrum analysis method, extract the harmonic component sequences {P1, P2, ..., Pk, ..., PK} in P, where K is the number of harmonic component sequences in P and Pk = {Pk(1), Pk(2), ..., Pk(M)}; thus P = P1 + P2 + ... + PK + R, where R = P − P1 − P2 − ... − PK is the residual sequence left after removing the harmonic component sequences from P.
The detailed process of using the multi-scale wavelet power spectrum analysis method to extract the harmonic component sequences {P1, P2, ..., Pk, ..., PK} in P is as follows. Suppose a discrete time series xn, n = 1, ..., N, with N sampling points in total and sampling time interval δt = 1. The Morlet wavelet transform is applied to analyze the harmonic components of the time series and extract the time series corresponding to each harmonic component band.
2.1 Determine the analysis periods
The period Tj of the wavelet transform (the abscissa of Fig. 3) is related to the scale parameter sj of the wavelet analysis; considering the center-period characteristic of the Morlet mother wavelet, here Tj = sj. The scale parameters are chosen as sj = 2^(j·δj), where δj = 1/4 and j = 0, 1, ..., J, giving J + 1 scales in total; the maximum of J does not exceed Jmax = 4·log2(N), and here J = 48.
2.2 Determine the global wavelet transform spectrum
At the n-th sampling point, the local wavelet transform spectrum Wn(sj) corresponding to scale parameter sj is:
Wn(sj) = Σ_{n'=1}^{N} x_{n'} · ψ*((n' − n)·δt / sj)
where ψ*(·) is the conjugate function of ψ(·), and the Morlet mother wavelet function at the n-th sampling point and scale parameter sj is:
ψ(η) = π^(−1/4) · e^(iω0·η) · e^(−η²/2), with η = (n' − n)·δt / sj
Integrating the modulus |Wn(sj)| of Wn(sj) over the whole sampling interval gives the global wavelet transform spectrum corresponding to scale parameter sj:
W̄²(sj) = (1/N) · Σ_{n=1}^{N} |Wn(sj)|²
The present invention uses the standardized global wavelet transform spectrum W̄²(sj)/σ² (the solid line of Fig. 3), where σ² is the variance of xn.
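The local transform and its time average can be sketched numerically. This is a minimal sketch under stated assumptions: the patent's own normalization images did not survive extraction, so the Morlet center frequency w0 = 6 and the (dt/s)^1/2 energy normalization follow common wavelet-analysis conventions rather than the patent text.

```python
import numpy as np

def morlet(eta, w0=6.0):
    """Morlet mother wavelet psi0(eta) = pi^(-1/4) * exp(i*w0*eta) * exp(-eta^2/2)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * eta) * np.exp(-eta ** 2 / 2)

def global_wavelet_spectrum(x, scales, dt=1.0):
    """Local wavelet transform W_n(s) by direct summation over the series,
    and its time-averaged squared modulus, the global wavelet spectrum."""
    n = np.arange(len(x))
    W = np.empty((len(scales), len(x)), dtype=complex)
    for j, s in enumerate(scales):
        for k in n:
            eta = (n - k) * dt / s
            # sqrt(dt/s) keeps unit energy across scales (assumed convention)
            W[j, k] = np.sum(x * np.conj(morlet(eta)) * np.sqrt(dt / s))
    return W, np.mean(np.abs(W) ** 2, axis=1)   # global spectrum per scale
```

For a pure sinusoid the global spectrum peaks near the scale whose period matches the oscillation, which is how the major periods in Fig. 3 are located.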
2.3 Significance test of the global wavelet transform spectrum
The period corresponding to the maximum of the global wavelet transform spectrum curve is generally taken as the major period, but whether it is significant must be established by a significance test. Here the global wavelet transform spectrum obtained above is compared with the red noise spectrum to judge its significance, where the red noise spectrum Qk is expressed as:
Qk = (1 − α²) / (1 + α² − 2α·cos(2πk/N)), k = 0, 1, ..., N/2
where α is the lag-one autocorrelation coefficient of the time series xn. Assuming the spectrum of a non-periodic process, the ratio of the global wavelet transform spectrum to the red noise spectrum follows a chi-squared distribution with ν degrees of freedom, where the degrees of freedom are:
ν = 2·sqrt(1 + (N·δt / (γ·sj))²)
and γ is the decorrelation factor; for the Morlet wavelet, γ = 2.32. A significance level of 0.05 is taken here: when W̄²(sj)/σ² > Qk·χ²_ν(0.95)/ν, the period corresponding to the global wavelet transform spectrum is significant; this threshold Qk·χ²_ν(0.95)/ν is the dotted line of Fig. 3.
2.4 Extracting the time series of a harmonic component band
To extract the time series x'n corresponding to a specific period band [T1, T2], the scale-parameter band corresponding to [T1, T2] is known from (1); for the Morlet wavelet, extracting the time series corresponding to this scale-parameter band amounts to summing the contributions of the scales in the band, i.e.:
where the summand is the real part of Wn(sj); for the Morlet wavelet, ψ0(0) = π^(-1/4) and Cδ = 0.776.
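Where the patent's own reconstruction formula survives only as an image, the band extraction can be sketched as follows. This is a hedged illustration: the Torrence & Compo-style prefactor dj·sqrt(dt)/(Cδ·ψ0(0)) is an assumption consistent with the constants ψ0(0) = π^(-1/4) and Cδ = 0.776 quoted above, and `W`, `scales` and the band limits are hypothetical inputs:

```python
import numpy as np

def extract_band(W, scales, dj, dt, s_lo, s_hi):
    """Reconstruct the time series of one period band from CWT coefficients.

    W      : complex array of shape (n_scales, n_times), W[j, n] = W_n(s_j)
    scales : array of scale parameters s_j

    Sums Re(W_n(s_j)) / sqrt(s_j) over the scales inside [s_lo, s_hi];
    the prefactor is an assumption where the source formula is missing.
    """
    C_delta, psi0 = 0.776, np.pi ** -0.25
    mask = (scales >= s_lo) & (scales <= s_hi)
    terms = W[mask].real / np.sqrt(scales[mask])[:, None]
    return (dj * np.sqrt(dt) / (C_delta * psi0)) * terms.sum(axis=0)
```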
S3: For the harmonic component sequences {P1,P2,…,Pk,…,PK} in P, BP neural networks optimized by particle swarm optimization and corrected by the overlay error method are used for prediction; with the prediction horizon set to l, a prediction result of horizon l is obtained for each harmonic component sequence {P1,P2,…,Pk,…,PK}.
The detailed process of the particle-swarm-optimized BP neural network employed in step S3 is:
(1) According to the Kolmogorov theorem, a 3-layer BP neural network model is established; the number of input layer neurons is I, the number of hidden layer neurons is H, and the number of output layer neurons is O, with H = 2*I+1 and O = 1;
(2) Determine the parameters to be optimized, including the input layer neuron number I of the BP neural network and the training set length L, as well as a group of weights W = (w(1), w(2), ..., w(q)), q = I*H+H*O+H+O, where w(1)~w(I*H) are the link weights from the input layer of the BP neural network to the hidden layer neurons, w(I*H+1)~w(I*H+H*O) are the link weights from the hidden layer neurons to the output layer neurons, w(I*H+H*O+1)~w(I*H+H*O+H) are the thresholds of the BP neural network hidden layer neurons, and w(I*H+H*O+H+1)~w(I*H+H*O+H+O) are the thresholds of the BP neural network output layer neurons;
(3) Initialize the particle swarm, where Q1 is the total number of particles; the i-th particle is Xi = (Ii, Wi, Li) and its velocity is Vi = (v_Ii, v_Wi, v_Li), where Ii, Wi, Li are one group of candidate solutions for the parameters I, W, L;
(4) For each particle Xi = (Ii, Wi, Li) in the swarm, determine the parameters and construct the input and output matrices of the BP neural network training set; for the harmonic component sequence Pk and BP neural network input layer neuron number Ii, matrices Z1 and Z2 are first established, where:
For the neural network training set length L to be optimized, the last Li columns of Z1 are taken as the training set input matrix Itrain and the last Li columns of Z2 as the training set output matrix Otrain; the forecast horizon l is taken as the test horizon, with the last l columns of Z1 as the test set input matrix Itest and the last l columns of Z2 as the test set output matrix Otest. The error sum of squares of the BP neural network constructed from the training set on the test set simulation results is used as the particle's fitness value, and with minimum fitness as the optimization direction this serves as the evaluation criterion for judging the quality of each particle; the current individual extreme value of particle Xi is recorded as Pbest(i), and the best individual among the Pbest(i) of the swarm is taken as the global extreme value Gbest;
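The construction of Z1 and Z2 described in step (4) is a plain sliding window over the sequence; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def build_windows(series, n_inputs):
    """Build the Z1/Z2 matrices described above from a sequence P_k.

    Z1 has n_inputs rows; column t holds P_k(t), ..., P_k(t + n_inputs - 1),
    and the matching Z2 entry is the next value P_k(t + n_inputs), so the
    network learns a one-step-ahead mapping.
    """
    p = np.asarray(series, dtype=float)
    cols = len(p) - n_inputs
    Z1 = np.stack([p[t:t + n_inputs] for t in range(cols)], axis=1)
    Z2 = p[n_inputs:].reshape(1, -1)
    return Z1, Z2

# the last L columns then serve as the training set, the last l columns as the test set
Z1, Z2 = build_windows(range(1, 11), n_inputs=3)   # toy series 1..10
print(Z1.shape, Z2.shape)   # → (3, 7) (1, 7)
```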
(5) For each particle Xi in the swarm, update its position and velocity respectively;
where ω is the inertia weight, c1 and c2 are acceleration factors, g is the current iteration number, and r1 and r2 are random numbers distributed in [0,1];
(6) Recalculate the objective function value of each particle and update Pbest(i) and Gbest;
(7) Judge whether the maximum iteration number has been reached; if so, terminate the optimization process and obtain the optimal BP neural network parameters Ibest, Wbest(wbest(1), wbest(2), ..., wbest(q)) and Lbest; otherwise return to step (4).
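Steps (5)-(7) amount to the standard particle swarm update; a compact sketch under the stated update rule, with the constants ω = 0.5 and c1 = c2 = 1.49445 given later in the text (the toy fitness function and swarm here are illustrative, not the patent's BP-network fitness):

```python
import random

def pso_step(particles, velocities, pbest, gbest, fitness,
             w=0.5, c1=1.49445, c2=1.49445):
    """One particle-swarm iteration over real-valued parameter vectors.

    Implements the rule of step (5): v <- w*v + c1*r1*(pbest - x)
    + c2*r2*(gbest - x), then x <- x + v; returns the updated gbest.
    """
    for i, x in enumerate(particles):
        r1, r2 = random.random(), random.random()
        velocities[i] = [w * v + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
                         for v, xi, pb, gb in zip(velocities[i], x, pbest[i], gbest)]
        particles[i] = [xi + v for xi, v in zip(x, velocities[i])]
        if fitness(particles[i]) < fitness(pbest[i]):
            pbest[i] = particles[i][:]
    best = min(pbest, key=fitness)
    return best if fitness(best) < fitness(gbest) else gbest

# toy fitness: squared distance to (3, 3); the swarm should drift toward it
random.seed(0)
f = lambda x: sum((xi - 3.0) ** 2 for xi in x)
parts = [[0.0, 0.0], [5.0, 5.0]]
vels = [[0.0, 0.0], [0.0, 0.0]]
pb = [p[:] for p in parts]
gb = min(pb, key=f)
for _ in range(50):
    gb = pso_step(parts, vels, pb, gb, f)
```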
The detailed process of the overlay error correction method employed in step S3 is:
(1) First, construct the neural network training set Z3 and test set Z4 from the optimized BP neural network parameters Ibest, Wbest(wbest(1), wbest(2), ..., wbest(q)) and Lbest, and initialize the BP neural network link weights and thresholds, where wbest(1)~wbest(I*H) are the initial values of the link weights from the input layer of the BP neural network to the hidden layer neurons, wbest(I*H+1)~wbest(I*H+H*O) are the initial values of the link weights from the hidden layer to the output layer neurons, wbest(I*H+H*O+1)~wbest(I*H+H*O+H) are the initial values of the thresholds of the BP neural network hidden layer neurons, and wbest(I*H+H*O+H+1)~wbest(I*H+H*O+H+O) are the initial values of the thresholds of the BP neural network output layer neurons;
(2) Define the rate of change of the sequence Pk at sampling point i as c = |pk(i)-pk(i-1)|. For the prediction horizon l, compute the maximum rate of change cmax over the last l steps of Z4; l-step iterated prediction can then be carried out. During prediction, cmax serves as the maximum allowed rate of change between the neural network output at the j-th step and the previous predicted (or actual) value: if the difference is smaller than cmax, the output is taken as the predicted value of the j-th step; otherwise, if the output exceeds the previous value, the predicted value is set to the previous value plus cmax, and if it is below the previous value, to the previous value minus cmax.
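The maximum rate-of-change rule can be sketched directly (names are illustrative; the clamp logic follows the rule above, as also spelled out for YR later in the text):

```python
def clamp_step(y_prev, y_raw, c_max):
    """Overlay (maximum rate-of-change) correction for one iterated step.

    If the network output moves from the previous value by more than c_max,
    it is clipped to y_prev +/- c_max.
    """
    if abs(y_raw - y_prev) < c_max:
        return y_raw
    return y_prev + c_max if y_raw > y_prev else y_prev - c_max

def corrected_forecast(y0, raw_outputs, c_max):
    """Apply the correction across l iterated prediction steps."""
    out, prev = [], y0
    for y in raw_outputs:
        prev = clamp_step(prev, y, c_max)
        out.append(prev)
    return out

# the spike to 5.0 is clipped to 0.5 above the previous prediction
print(corrected_forecast(1.0, [1.2, 5.0, 1.9], c_max=0.5))
```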
S4: The first-order difference sequence D of the residual sequence R is predicted using an RBF neural network optimized by particle swarm optimization and corrected by the overlay error method, and the prediction result of the residual sequence is obtained by the inverse difference operation. With the prediction horizon set to l, the prediction result of the residual sequence R is YR = {YR(1), YR(2), ..., YR(l)}.
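The difference and inverse-difference round trip of step S4 can be sketched as follows (the `model` callable stands in for the trained RBF network and is an assumption for illustration):

```python
import numpy as np

def predict_residual(R, model, l):
    """Predict residual sequence R via its first-difference series.

    D(i) = R(i+1) - R(i) is forecast by `model` (any callable returning l
    difference predictions); the inverse difference, a cumulative sum
    anchored at the last observed value R[-1], recovers the residual
    forecast Y_R as in step S4.
    """
    D = np.diff(np.asarray(R, dtype=float))
    d_pred = np.asarray(model(D, l), dtype=float)
    return R[-1] + np.cumsum(d_pred)

# toy check: a model predicting a constant difference of 1 continues the line
Y = predict_residual([0.0, 0.5, 1.0], lambda D, l: [1.0] * l, l=3)
print(Y)   # → [2. 3. 4.]
```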
The detailed process of the particle-swarm-optimized RBF neural network employed in step S4 is:
(1) Determine the parameters to be optimized, including the RBF neural network input layer neuron number I and training set length L;
(2) Initialize the particle swarm, where Q2 is the total number of particles; the i-th particle is Xi = (Ii, Li) and its velocity is Vi = (v_Ii, v_Li), where Ii and Li are one group of candidate solutions for the parameters I and L;
(3) For each particle Xi = (Ii, Li) in the swarm, determine the parameters and construct the input and output matrices of the RBF neural network training set; for the residual sequence R and RBF neural network input layer neuron number Ii, matrices Z5 and Z6 are first established, where:
For the neural network training set length L to be optimized, the last Li columns of Z5 are taken as the training set input matrix Itrain and the last Li columns of Z6 as the training set output matrix Otrain; the forecast horizon l is taken as the test horizon, with the last l columns of Z5 as the test set input matrix Itest and the last l columns of Z6 as the test set output matrix Otest. The error sum of squares of the RBF neural network constructed from the training set on the test set simulation results is used as the particle's fitness value, and with minimum fitness as the optimization direction this serves as the evaluation criterion for judging the quality of each particle; the current individual extreme value of particle Xi is recorded as Pbest(i), and the best individual among the Pbest(i) of the swarm is taken as the global extreme value Gbest;
(4) For each particle Xi in the swarm, update its position and velocity respectively;
where ω is the inertia weight, c1 and c2 are acceleration factors, g is the current iteration number, and r1 and r2 are random numbers distributed in [0,1];
(5) Recalculate the objective function value of each particle and update Pbest(i) and Gbest;
(6) Judge whether the maximum iteration number has been reached; if so, terminate the optimization process and obtain the optimal RBF neural network parameters Ibest and Lbest; otherwise return to step (3).
The detailed process of the overlay error correction method employed in step S4 is:
(1) First, construct the neural network training set Z7 and test set Z8 from the optimized RBF neural network parameters Ibest and Lbest, and initialize the RBF neural network link weights and thresholds, where:
(2) Define the rate of change of the sequence R at sampling point i as c = |R(i)-R(i-1)|. For the prediction horizon l, compute the maximum rate of change cmax over the last l steps of Z8; l-step iterated prediction can then be carried out. During prediction, cmax serves as the maximum allowed rate of change between the neural network output YR(j) at the j-th step and the previous predicted (or actual) value YR(j-1): if |YR(j)-YR(j-1)| < cmax, YR(j) is taken as the predicted value of the j-th step; otherwise, if YR(j) > YR(j-1), then YR(j) = YR(j-1)+cmax, and if YR(j) < YR(j-1), then YR(j) = YR(j-1)-cmax.
The inertia weight is ω = 0.5 and the acceleration factors are c1 = c2 = 1.49445.
S5: The prediction results of each harmonic component sequence and the prediction result of the residual sequence are added together to obtain the final prediction result Y.
Specific test case:
Following the flow chart of Fig. 1, the original second-level active power time series of a wind turbine at a wind farm in China, collected starting from 07:53 on October 5, 2015, is taken. Since this example performs minute-level ultra-short-term forecasting, the original active power time series is first adjusted according to the forecast interval to obtain the minute-level average active power time series, as shown in Fig. 2, where p(i) is the original active power time series collected from the wind turbine (second level, but with a non-uniform sampling interval), p'(j) is the adjusted minute-level average active power time series, and t and z denote the indices of the sampling points in the original active power series corresponding to the end of each minute. This test case takes the first 2150 points of p'(j) as training data and carries out 1-step, 2-step and 3-step prediction experiments over a total horizon of 50 steps each, using the mean absolute percentage error (MAPE) as the criterion for testing the effectiveness of the algorithm:
where Y(i) and p'(i) are the predicted and sampled wind turbine active power values, respectively, and l is the prediction horizon.
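Assuming the standard MAPE definition (the formula itself survives only as an image in the source), the criterion can be computed as:

```python
def mape(y_pred, y_true):
    """Mean absolute percentage error used as the evaluation criterion:
    the mean of |Y(i) - p'(i)| / p'(i) over the l predicted points."""
    assert len(y_pred) == len(y_true)
    return sum(abs(yp - yt) / abs(yt) for yp, yt in zip(y_pred, y_true)) / len(y_true)

print(mape([110.0, 90.0], [100.0, 100.0]))   # → 0.1
```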
The minute-level wind turbine active power sequence after removing the mean is denoted P. Fig. 3 shows the wavelet power spectrum analysis result for P, with the red-noise test line at the 5% significance level as the threshold. The minute-level average active power time series of this wind turbine has 5 harmonic components, with extreme points at 4096, 2048, 1218, 609 and 256. Taking, on either side of each extreme point, the first period point that falls below the red-noise test line yields a half-period band; this half-period band is the harmonic component band. In this example there are 5 harmonic component bands in total: [2896.3, 4871], [1722.2, 2435.5], [861.1, 1722.2], [430.5, 724.1] and [215.3, 304.4]. According to the wavelet reconstruction method, the time series P1, P2, P3, P4 and P5 corresponding to these 5 half-period bands are extracted and the corresponding residual sequence R is obtained, so that P = P1+P2+P3+P4+P5+R; see Fig. 4. It can be seen that the 5 harmonic component signals are strongly regular, and high-accuracy prediction of them can be expected. On the other hand, although prediction error for the residual is unavoidable, calculation shows that the energy (variance) of the residual R accounts for only 40.80% of the energy (variance) of P, a significant decline; the prediction error for the residual will therefore be much smaller than the error of predicting P directly.
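The energy (variance) accounting used here can be checked with a small sketch (the toy sine-plus-noise data is illustrative, not the wind turbine series):

```python
import numpy as np

def residual_energy_share(P, components):
    """Share of the energy (variance) of P left in the residual R.

    R = P minus the sum of the extracted harmonic component series; the
    text reports var(R)/var(P) = 40.80% for its five-band decomposition.
    """
    P = np.asarray(P, dtype=float)
    R = P - np.sum(components, axis=0)
    return R.var() / P.var()

# toy check: removing a signal's dominant sinusoid leaves only noise energy
t = np.arange(1000)
wave = np.sin(2 * np.pi * t / 50)
rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(1000)
share = residual_energy_share(wave + noise, [wave])
print(share < 0.1)   # True: the residual keeps well under 10% of the energy
```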
Although neural networks have powerful nonlinear fitting ability and fast learning ability, the choice of an appropriate neural network model and the determination of the network structure, training set and test set still depend mainly on manual experience or trial and error, and generality is poor. Analysis of the extracted harmonic component signals, particularly the sequences P3, P4 and P5, shows that although they exhibit obvious periodic variation and are smooth, their amplitude and phase change markedly over time, which degrades the learning and generalization performance of a neural network, causes mutations in the predicted values, and lowers prediction accuracy. Therefore, on the one hand, the present invention selects the BP neural network, which has stronger fault tolerance; on the other hand, based on the continuity of weather changes, a maximum rate-of-change index is introduced to constrain mutation of the predicted values. The residual signal after first-order differencing fluctuates around the zero axis, which is better suited to an RBF neural network. Hence the present invention predicts the extracted harmonic component signals P1~P5 with a BP neural network optimized by the particle swarm algorithm and corrected by the overlay error method, and predicts the residual signal R with an RBF neural network optimized by the particle swarm algorithm and corrected by the overlay error method. Experiments show that this technique effectively suppresses mutations in the neural network predictions and improves prediction accuracy.
For P1~P5, the BP neural network model optimized by the particle swarm algorithm is used: the range of the input layer neuron number is [5,14], the training set length range is [50,2100], the range of the network weights and thresholds is [-3,3], the swarm size is 50, and 30 iterations are run. For R, the RBF neural network optimized by the particle swarm algorithm is used: the range of the input layer neuron number is [5,20], the training set length range is [50,2100], the swarm size is 50, and 30 iterations are run. Table 1 shows, for 3-step prediction, the optimization results of the two parameters, input layer neuron number I and training set length L, for the harmonic component sequences P1~P5 and the residual R; the optimized BP neural network weights and thresholds for P1~P5 are not listed one by one because the number of parameters is too large.
Table 1
This test case carried out 1-step, 2-step and 3-step prediction experiments with a total prediction horizon of 50; the prediction results are shown in Fig. 5 (a)-(c), and the prediction error statistics are given in Table 2.
Table 2
     | 1-step prediction | 2-step prediction | 3-step prediction
MAPE | 0.0635            | 0.0730            | 0.1248
Contrast experiment 1
The influence of the maximum rate-of-change index on prediction accuracy is tested. Since the maximum rate-of-change index is configured individually for each significant periodic sequence P1~P5 and for the residual sequence R, P5 and R are taken as examples here. Fig. 6 (a) shows the 1-step prediction comparison experiment for P5 with a total prediction horizon of 50: after the maximum rate-of-change index is adopted, the MAPE error drops from 0.0093 to 0.0032, the smoothness of the prediction curve is preserved, and the prediction accuracy improves by 190.63%. Fig. 6 (b) shows the 1-step prediction comparison experiment for R with a total prediction horizon of 50: after the maximum rate-of-change index is adopted, the MAPE error drops from 1.0268 to 0.9926, and the prediction accuracy improves by 3.45%.
Contrast experiment 2
For the minute-level wind turbine active power sequence, the first-difference operation is carried out directly, and the particle-swarm-optimized RBF neural network model is then used for modeling; the range of the input layer neuron number is [5,20], the training set length range is [50,2100], the swarm size is 50, and 30 iterations are run. Table 3 shows, for 3-step prediction, the optimization results of the two parameters, input layer neuron number I and training set length L:
Table 3
Input layer neuron number (I) | Training set length (L)
12                            | 169
This test case carried out 1-step, 2-step and 3-step prediction experiments with a total prediction horizon of 50; the prediction results are shown in Fig. 7 (a)-(c), and the prediction error statistics are given in Table 4.
Table 4
     | 1-step prediction | 2-step prediction | 3-step prediction
MAPE | 0.2727            | 0.4825            | 0.4715
The average 1-3-step MAPE error is 369.46% higher than that of Table 2, which shows that the original active power data collected from the wind turbine fluctuates violently and has poor regularity, so using it directly for neural network modeling gives poor results.
Contrast experiment 3
In addition, to verify the effectiveness of the method, the present invention is also compared with a conventional wavelet analysis method. The conventional wavelet analysis method decomposes the original sequence into a relatively smooth low-frequency trend sequence and high-frequency fluctuation sequences, predicts each sequence individually, and accumulates the predicted values as the final prediction result; existing literature has shown that this method performs much better than predicting the original series directly.
The comparison experiment performs a 3-layer decomposition of P(i) using the db3 wavelet, obtaining the low-frequency trend sequence A3 and the high-frequency fluctuation sequences D1, D2 and D3, as shown in Fig. 8. Each sequence is then predicted individually with an RBF neural network. To verify the effectiveness of the particle-swarm-optimized RBF proposed by the present invention, two methods are tested: fixed RBF parameters and RBF parameters optimized by the particle swarm algorithm:
1) Fixed parameters: the input layer neuron number is chosen as 20, i.e., the active power data of the preceding 20 sampling points are used to predict the subsequent active power data, and the training set length is 500. The 1-step, 2-step and 3-step prediction errors with a total prediction horizon of 50 are shown in Table 5, and the prediction results in Fig. 9 (a)-(c); it can be seen that the average 1-3-step MAPE error is 131.15% higher than that of Table 2.
Table 5
     | 1-step prediction | 2-step prediction | 3-step prediction
MAPE | 0.1829            | 0.1927            | 0.2284
2) RBF parameters optimized by the particle swarm: the range of the input layer neuron number is [10,20], the training set length range is [50,2100], the swarm size is 50, and 30 iterations are run. For 3-step prediction, the RBF neural network optimization results are shown in Table 6, and the prediction results in Fig. 10:
Table 6
                          | A3  | D1  | D2  | D3
Input layer neuron number | 15  | 18  | 15  | 17
Training set length       | 164 | 113 | 129 | 111
The 1-step, 2-step and 3-step prediction errors with a total prediction horizon of 50 are shown in Table 7, and the prediction results in Figure 10 (a)-(c); it can be seen that the average 1-3-step MAPE error is 18.33% higher than that of Table 2.
Table 7
     | 1-step prediction | 2-step prediction | 3-step prediction
MAPE | 0.0847            | 0.1029            | 0.1216
The general principles and principal features of the present invention, together with its advantages, have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and the description merely illustrate the principle of the invention. Various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The claimed scope of the invention is defined by the appended claims and their equivalents.
Claims (6)
1. A wind turbine active power prediction and error correction method based on neural networks, characterized by comprising the following steps:
(1) reading in the raw sampled active power time series p = {p(i), i = 1, 2, ..., N} of the wind turbine, where N is the number of raw wind turbine active power sampling points; adjusting p according to the forecast interval into the average active power time series p' = {p'(j), j = 1, 2, ..., M}, where M is the number of sampling points of the wind turbine average active power sequence after adjustment by the forecast interval; and defining P as p' with its average value removed;
(2) using the multi-scale wavelet power spectrum analysis method to extract the harmonic component sequences {P1, P2, ..., Pk, ..., PK} in P, where K is the number of harmonic component sequences in P and Pk = {Pk(1), Pk(2), ..., Pk(M)}, so that P = P1+P2+...+PK+R, where R = P-P1-P2-...-PK is the residual sequence after the harmonic component sequences are removed from P;
(3) predicting the harmonic component sequences {P1, P2, ..., Pk, ..., PK} in P with BP neural networks optimized by particle swarm optimization and corrected by the overlay error method, the prediction horizon being set to l, so that a prediction result of horizon l is obtained for each harmonic component sequence {P1, P2, ..., Pk, ..., PK};
(4) predicting the first-order difference sequence D of the residual sequence R with an RBF neural network optimized by particle swarm optimization and corrected by the overlay error method, and obtaining the prediction result of the residual sequence by the inverse difference operation, the prediction horizon being set to l, so that the prediction result of the residual sequence R is YR = {YR(1), YR(2), ..., YR(l)};
(5) adding the prediction results of the harmonic component sequences and the prediction result of the residual sequence to obtain the final prediction result Y.
2. The neural-network-based wind turbine active power prediction and error correction method according to claim 1, characterized in that the detailed process of the particle-swarm-optimized BP neural network employed in step (3) is:
(1) according to the Kolmogorov theorem, establishing a 3-layer BP neural network model, where the number of input layer neurons is I, the number of hidden layer neurons is H, and the number of output layer neurons is O, with H = 2*I+1 and O = 1;
(2) determining the parameters to be optimized, including the input layer neuron number I of the BP neural network and the training set length L, as well as a group of weights W = (w(1), w(2), ..., w(q)), q = I*H+H*O+H+O, where w(1)~w(I*H) are the link weights from the input layer of the BP neural network to the hidden layer neurons, w(I*H+1)~w(I*H+H*O) are the link weights from the hidden layer of the BP neural network to the output layer neurons, w(I*H+H*O+1)~w(I*H+H*O+H) are the thresholds of the BP neural network hidden layer neurons, and w(I*H+H*O+H+1)~w(I*H+H*O+H+O) are the thresholds of the BP neural network output layer neurons;
(3) initializing the particle swarm, where Q1 is the total number of particles, the i-th particle is Xi = (Ii, Wi, Li), and its velocity is Vi = (v_Ii, v_Wi, v_Li), where Ii, Wi, Li are one group of candidate solutions for the parameters I, W, L;
(4) for each particle Xi = (Ii, Wi, Li) in the swarm, determining the parameters and constructing the input and output matrices of the BP neural network training set, where for the harmonic component sequence Pk and BP neural network input layer neuron number Ii, matrices Z1 and Z2 are first established, where:
Z_1 = \begin{bmatrix}
P_k(1) & P_k(2) & \cdots & P_k(M-I_i) \\
P_k(2) & P_k(3) & \cdots & P_k(M-I_i+1) \\
\vdots & \vdots & & \vdots \\
P_k(I_i) & P_k(I_i+1) & \cdots & P_k(M-1)
\end{bmatrix}_{I_i \times (M-I_i)},
\qquad
Z_2 = \begin{bmatrix}
P_k(I_i+1) & P_k(I_i+2) & \cdots & P_k(M)
\end{bmatrix}_{1 \times (M-I_i)}
for the neural network training set length L to be optimized, the last Li columns of Z1 are taken as the training set input matrix Itrain and the last Li columns of Z2 as the training set output matrix Otrain; the forecast horizon l is taken as the test horizon, with the last l columns of Z1 as the test set input matrix Itest and the last l columns of Z2 as the test set output matrix Otest; the error sum of squares of the BP neural network constructed from the training set on the test set simulation results serves as the particle's fitness value, and with minimum fitness as the optimization direction this is the evaluation criterion for judging the quality of each particle; the current individual extreme value of particle Xi is recorded as Pbest(i), and the best individual among the Pbest(i) of the swarm is taken as the global extreme value Gbest;
(5) for each particle Xi in the swarm, updating its position and velocity respectively;
V_i^{g+1} = \omega V_i^g + c_1 r_1 \left( P_{best}(i) - X_i^g \right) + c_2 r_2 \left( G_{best} - X_i^g \right),
\qquad
X_i^{g+1} = X_i^g + V_i^{g+1}
where ω is the inertia weight, c1 and c2 are acceleration factors, g is the current iteration number, and r1 and r2 are random numbers distributed in [0,1];
(6) recalculating the objective function value of each particle and updating Pbest(i) and Gbest;
(7) judging whether the maximum iteration number has been reached; if so, terminating the optimization process and obtaining the optimal BP neural network parameters Ibest, Wbest(wbest(1), wbest(2), ..., wbest(q)) and Lbest; otherwise returning to step (4).
3. The neural-network-based wind turbine active power prediction and error correction method according to claim 2, characterized in that the detailed process of the overlay error correction method employed in step (3) is:
(1) first constructing the neural network training set Z3 and test set Z4 from the optimized BP neural network parameters Ibest, Wbest(wbest(1), wbest(2), ..., wbest(q)) and Lbest, and initializing the BP neural network link weights and thresholds, where:
Z_3 = \begin{bmatrix}
P_k(M-L_{best}-I_{best}+1) & P_k(M-L_{best}-I_{best}+2) & \cdots & P_k(M-I_{best}) \\
P_k(M-L_{best}-I_{best}+2) & P_k(M-L_{best}-I_{best}+3) & \cdots & P_k(M-I_{best}+1) \\
\vdots & \vdots & & \vdots \\
P_k(M-L_{best}) & P_k(M-L_{best}+1) & \cdots & P_k(M-1)
\end{bmatrix}_{I_{best} \times L_{best}},
\qquad
Z_4 = \begin{bmatrix}
P_k(M-L_{best}+1) & P_k(M-L_{best}+2) & \cdots & P_k(M)
\end{bmatrix}_{1 \times L_{best}}
w_best(1)~w_best(I*H) are the initial values of the connection weights from the input-layer to the hidden-layer neurons of the BP neural network; w_best(I*H+1)~w_best(I*H+H*O) are the initial values of the connection weights from the hidden layer to the output-layer neurons; w_best(I*H+H*O+1)~w_best(I*H+H*O+H) are the initial values of the hidden-layer neuron thresholds; and w_best(I*H+H*O+H+1)~w_best(I*H+H*O+H+O) are the initial values of the output-layer neuron thresholds;
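The index ranges above partition the flat optimized vector w_best into the BP network's weight matrices and threshold vectors. A minimal sketch of that unpacking (the function name, NumPy usage, and array layout are illustrative assumptions, not part of the claim):

```python
import numpy as np

def unpack_bp_params(w_best, I, H, O):
    """Split the flat vector w_best into input->hidden weights,
    hidden->output weights, hidden thresholds, and output thresholds,
    following the index ranges given in the claim."""
    w = np.asarray(w_best, dtype=float)
    assert w.size == I * H + H * O + H + O
    ih = w[:I * H].reshape(H, I)               # input-layer to hidden-layer weights
    ho = w[I * H:I * H + H * O].reshape(O, H)  # hidden-layer to output-layer weights
    bh = w[I * H + H * O:I * H + H * O + H]    # hidden-layer neuron thresholds
    bo = w[I * H + H * O + H:]                 # output-layer neuron thresholds
    return ih, ho, bh, bo
```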
(2) Define the rate of change of the sequence P_k at sampling point i as c = |p_k(i) − p_k(i−1)|. For prediction step length l, compute the maximum rate of change c_max over the last l steps of Z_4; iterative l-step prediction can then be performed. During prediction, c_max serves as the maximum allowed rate of change between the neural network output value Y(j) at the j-th prediction step and the previous predicted or actual value Y(j−1): if |Y(j) − Y(j−1)| < c_max, then Y(j) is taken as the predicted value of step j; otherwise, if Y(j) > Y(j−1), then Y(j) = Y(j−1) + c_max, and if Y(j) < Y(j−1), then Y(j) = Y(j−1) − c_max.
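The rate-of-change constraint in step (2) amounts to clamping each successive prediction to within c_max of its predecessor. A minimal sketch, assuming a list of raw network outputs and the last known value (function and variable names are illustrative, not from the claim):

```python
def clamp_predictions(raw_outputs, last_value, c_max):
    """Maximum-rate-of-change correction: each step's network output may
    differ from the previous (predicted or actual) value by at most c_max;
    larger jumps are clipped to previous value +/- c_max."""
    corrected = []
    prev = last_value
    for y in raw_outputs:
        if abs(y - prev) >= c_max:
            y = prev + c_max if y > prev else prev - c_max
        corrected.append(y)
        prev = y  # the corrected value feeds the next step's check
    return corrected
```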
4. The neural-network-based wind turbine active power prediction and error correction method according to claim 1, characterized in that the detailed process of the particle swarm optimization of the RBF neural network used in step (4) is:
(1) Determine the parameters to be optimized, including the RBF neural network input-layer node number I and the training set length L;
(2) Initialize the particle swarm X = (X_1, X_2, …, X_{Q_2}), where Q_2 is the total number of particles; the i-th particle is X_i = (I_i, L_i) with velocity V_i, where I_i and L_i form one candidate solution for the parameters I and L;
(3) For the parameters determined by each particle X_i = (I_i, L_i) in the swarm, construct the input and output matrices of the RBF neural network training set: for the residual sequence R and the RBF neural network input-layer node number I_i, first establish the matrices Z_5 and Z_6, where:
$$Z_5 = \begin{bmatrix} R(1) & R(2) & \cdots & R(M-I_i) \\ R(2) & R(3) & \cdots & R(M-I_i+1) \\ \vdots & \vdots & & \vdots \\ R(I_i) & R(I_i+1) & \cdots & R(M-1) \end{bmatrix}_{I_i \times (M-I_i)}$$
$$Z_6 = \begin{bmatrix} R(I_i+1) & R(I_i+2) & \cdots & R(M) \end{bmatrix}_{M-I_i}$$
For the training set length L_i to be optimized, the last L_i columns of Z_5 form the training set input matrix I_train, and the last L_i columns of Z_6 form the training set output matrix O_train. Taking the prediction step length l as the test step length, the last l columns of Z_5 form the test set input matrix I_test, and the last l columns of Z_6 form the test set output matrix O_test. The RBF neural network constructed from the training set simulates the test set, and the sum of squared errors of the simulation results is taken as the fitness value; with minimization of the fitness value as the optimization direction, this evaluation criterion judges the quality of each particle. Record the current individual extreme value of particle X_i as P_best(i), and take the best individual among the P_best(i) in the swarm as the global extreme value G_best;
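The matrices Z_5 and Z_6 are a sliding-window arrangement of the residual sequence: each column of Z_5 holds I_i consecutive samples and the matching column of Z_6 holds the sample that follows them. A sketch under these assumptions (NumPy usage and names are illustrative):

```python
import numpy as np

def build_training_matrices(R, I_i):
    """Build Z5 of shape (I_i, M - I_i), whose j-th column is the window
    R(j..j+I_i-1), and Z6 of shape (1, M - I_i) holding the next sample
    R(j+I_i), mirroring the matrices in the claim."""
    R = np.asarray(R, dtype=float)
    M = R.size
    n = M - I_i
    Z5 = np.column_stack([R[j:j + I_i] for j in range(n)])  # one window per column
    Z6 = R[I_i:].reshape(1, n)                              # the following samples
    return Z5, Z6
```

The last L_i columns of these matrices would then serve as the training set, and the last l columns as the test set, as described above.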
(4) For each particle X_i in the swarm, update its velocity and position respectively:
$$V_i^{g+1} = \omega V_i^{g} + c_1 r_1 \left( P_{best}(i) - X_i^{g} \right) + c_2 r_2 \left( G_{best} - X_i^{g} \right),$$
$$X_i^{g+1} = X_i^{g} + V_i^{g+1}$$
where ω is the inertia weight, c_1 and c_2 are acceleration factors, g is the current iteration number, and r_1 and r_2 are random numbers distributed in [0, 1];
(5) Recalculate the objective function value of each particle and update P_best(i) and G_best;
(6) Judge whether the maximum number of iterations has been reached; if so, terminate the optimization process to obtain the RBF neural network optimal parameters I_best and L_best; otherwise, return to step (3).
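Steps (2)-(6) describe a standard particle swarm optimization loop. A minimal generic sketch with a stand-in objective (all names, bounds, and the toy fitness function are illustrative assumptions; in the claim, the fitness is the RBF network's test-set squared error):

```python
import random

def pso_minimize(fitness, dim, n_particles=20, n_iter=50,
                 omega=0.5, c1=1.49445, c2=1.49445, lo=-10.0, hi=10.0):
    """Generic PSO minimization using the updates from the claim:
    V = omega*V + c1*r1*(Pbest - X) + c2*r2*(Gbest - X); X = X + V."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                     # individual extreme values
    pbest_f = [fitness(x) for x in X]
    gbest = min(zip(pbest_f, pbest))[1][:]        # global extreme value
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (omega * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = fitness(X[i])
            if f < pbest_f[i]:                    # update individual extreme value
                pbest_f[i], pbest[i] = f, X[i][:]
        gbest = min(zip(pbest_f, pbest))[1][:]    # update global extreme value
    return gbest, min(pbest_f)
```

In the claimed method the searched position would be the integer pair (I_i, L_i) and ω, c_1, c_2 would take the values given in claim 6.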
5. The neural-network-based wind turbine active power prediction and error correction method according to claim 4, characterized in that the detailed process of the superposition error correction method used in step (4) is:
(1) First, using the optimized RBF neural network parameters I_best and L_best, construct the neural network training set Z_7 and test set Z_8, and initialize the RBF neural network connection weights and thresholds, where:
$$Z_7 = \begin{bmatrix} R(M-L_{best}-I_{best}+1) & R(M-L_{best}-I_{best}+2) & \cdots & R(M-I_{best}) \\ R(M-L_{best}-I_{best}+2) & R(M-L_{best}-I_{best}+3) & \cdots & R(M-I_{best}+1) \\ \vdots & \vdots & & \vdots \\ R(M-L_{best}) & R(M-L_{best}+1) & \cdots & R(M-1) \end{bmatrix}_{I_{best} \times L_{best}},$$
$$Z_8 = \begin{bmatrix} R(M-L_{best}+1) & R(M-L_{best}+2) & \cdots & R(M) \end{bmatrix}_{L_{best}}$$
(2) Define the rate of change of the sequence R at sampling point i as c = |R(i) − R(i−1)|. For prediction step length l, compute the maximum rate of change c_max over the last l steps of Z_8; iterative l-step prediction can then be performed. During prediction, c_max serves as the maximum allowed rate of change between the neural network output value Y_R(j) at the j-th prediction step and the previous predicted or actual value Y_R(j−1): if |Y_R(j) − Y_R(j−1)| < c_max, then Y_R(j) is taken as the predicted value of step j; otherwise, if Y_R(j) > Y_R(j−1), then Y_R(j) = Y_R(j−1) + c_max, and if Y_R(j) < Y_R(j−1), then Y_R(j) = Y_R(j−1) − c_max.
6. The neural-network-based wind turbine active power prediction and error correction method according to claim 2 or 4, characterized in that the inertia weight ω = 0.5 and the acceleration factors c_1 = c_2 = 1.49445.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710473103.3A CN107169612A (en) | 2017-06-21 | 2017-06-21 | The prediction of wind turbine active power and error revising method based on neutral net |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107169612A true CN107169612A (en) | 2017-09-15 |
Family
ID=59818907
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107169612A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109543879A (en) * | 2018-10-22 | 2019-03-29 | 新智数字科技有限公司 | Load forecasting method and device neural network based |
CN109784563A (en) * | 2019-01-18 | 2019-05-21 | 南方电网科学研究院有限责任公司 | Ultra-short-term power prediction method based on virtual anemometer tower technology |
CN112100904A (en) * | 2020-08-12 | 2020-12-18 | 国网江苏省电力有限公司南京供电分公司 | ICOA-BPNN-based distributed photovoltaic power station active power virtual acquisition method |
CN112100904B (en) * | 2020-08-12 | 2022-08-23 | 国网江苏省电力有限公司南京供电分公司 | ICOA-BPNN-based distributed photovoltaic power station active power virtual acquisition method |
CN114386718A (en) * | 2022-03-16 | 2022-04-22 | 广州兆和电力技术有限公司 | Wind power plant output power short-time prediction algorithm combined with particle swarm neural network |
CN117331339A (en) * | 2023-12-01 | 2024-01-02 | 南京华视智能科技股份有限公司 | Coating machine die head motor control method and device based on time sequence neural network model |
CN117331339B (en) * | 2023-12-01 | 2024-02-06 | 南京华视智能科技股份有限公司 | Coating machine die head motor control method and device based on time sequence neural network model |
CN118223977A (en) * | 2024-04-30 | 2024-06-21 | 上海芯郡电子科技有限公司 | Vehicle cooling motor control system based on real-time flow analysis algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107169612A (en) | The prediction of wind turbine active power and error revising method based on neutral net | |
CN107704953A (en) | The short-term wind-electricity power probability density Forecasting Methodology of EWT quantile estimate forests | |
CN101729315B (en) | Network flow-predicting method and device based on wavelet package decomposition and fuzzy neural network | |
CN103729550B (en) | Multiple-model integration Flood Forecasting Method based on propagation time cluster analysis | |
CN103023065B (en) | Wind power short-term power prediction method based on relative error entropy evaluation method | |
CN102945507B (en) | Based on distributing wind energy turbine set Optimizing Site Selection method and the device of Fuzzy Level Analytic Approach | |
CN107578124A (en) | The Short-Term Load Forecasting Method of GRU neutral nets is improved based on multilayer | |
CN107301475A (en) | Load forecast optimization method based on continuous power analysis of spectrum | |
CN106295798A (en) | Empirical mode decomposition and Elman neural network ensemble wind-powered electricity generation Forecasting Methodology | |
CN102626557B (en) | Molecular distillation process parameter optimizing method based on GA-BP (Genetic Algorithm-Back Propagation) algorithm | |
CN107885951A (en) | A kind of Time series hydrological forecasting method based on built-up pattern | |
CN107609671A (en) | A kind of Short-Term Load Forecasting Method based on composite factor evaluation model | |
CN106197999A (en) | A kind of planetary gear method for diagnosing faults | |
CN107292446A (en) | A kind of mixing wind speed forecasting method based on consideration component relevance wavelet decomposition | |
CN106295899A (en) | Based on genetic algorithm and the wind power probability density Forecasting Methodology supporting vector quantile estimate | |
CN102509026A (en) | Comprehensive short-term output power forecasting model for wind farm based on maximum information entropy theory | |
CN114676822B (en) | Multi-attribute fusion air quality forecasting method based on deep learning | |
CN110490409B (en) | DNN-based low-voltage transformer area line loss rate benchmarking value setting method | |
CN107203827A (en) | A kind of wind turbine forecasting wind speed optimization method based on multiscale analysis | |
CN107832881A (en) | Wind power prediction error evaluation method considering load level and wind speed segmentation | |
CN109904878A (en) | A kind of windy electric field electricity-generating timing simulation scenario building method | |
CN106897794A (en) | A kind of wind speed forecasting method based on complete overall experience mode decomposition and extreme learning machine | |
CN103530700B (en) | Urban distribution network saturation loading Comprehensive Prediction Method | |
WO2023245399A1 (en) | Rice production potential simulation method based on land system and climate change coupling | |
CN106845705A (en) | The Echo State Networks load forecasting model of subway power supply load prediction system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170915 |