CN107301475A - Load forecast optimization method based on continuous power spectrum analysis - Google Patents
Load forecast optimization method based on continuous power spectrum analysis
- Publication number
- CN107301475A (application CN201710477986.5A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention discloses a load forecast optimization method based on continuous power spectrum analysis. A continuous power spectrum analysis method is used to extract the harmonic (significant periodic) component sequences implied in the power load time series and to separate out a residual sequence. Each harmonic component sequence is predicted with a BP neural network optimized by the particle swarm algorithm, giving a prediction result for each component sequence; the first-order difference sequence of the residual sequence is predicted with an RBF neural network optimized by the particle swarm algorithm, and the prediction result of the residual sequence is recovered by the inverse difference operation. Finally, the mean value of the average power load time series, the prediction results of the harmonic component sequences and the prediction result of the residual sequence are added to obtain the final prediction result. Aimed at the periodic characteristics of power system load data, the forecast model established by the invention can greatly improve the accuracy of power load forecasting.
Description
Technical Field
The invention belongs to the technical field of power systems, and particularly relates to a power load prediction optimization method based on continuous power spectrum analysis.
Background
The load of a power system refers to the sum of the power consumed by all electric equipment in the system, and is also called the comprehensive power load of the power system. The comprehensive power load plus the losses in the power grid and the service power of the power plants is the total power that all generators in the system should generate, also called the power generation load of the power system. The power load is an important factor influencing the safe and stable operation of the system. Power load prediction refers to the process of estimating the future development of the power load in advance by analyzing and studying the historical power load records and comprehensively considering the various factors that influence power load changes, such as social development planning, economic conditions, meteorological factors and holidays. Power load prediction is the basis of power system planning, scheduling and power utilization. Improving the technical level of power load prediction helps to make a reasonable power supply construction plan, to arrange the operation mode of the power grid and the maintenance plan of the generating units reasonably, to save coal and oil and reduce the cost of power generation, to plan power utilization management, and to improve the economic and social benefits of the power system. Therefore, power load prediction is one of the important contents of modern power system management. Owing to factors such as weather conditions and people's social activities, power load data contain a large number of random and nonlinear relations. The factors influencing the power load time series can be divided into internal and external random factors: the external factors include weather, society and economy, while the internal factors are the result of nonlinear factors inside the power system. The power load is the joint result of the internal and external random factors of the system, so inaccurate prediction is caused not only by external random factors but, more importantly, by the internal dynamic characteristics of the system.
Therefore, various forecasting methods have emerged, from general statistical models such as the ARIMA time series model and the grey model to various intelligent models such as neural network models and support vector machine models, in the hope that improved algorithms will improve power load forecasting accuracy; yet what matters most is the learning and generalization performance of the chosen forecasting method on the data. The power load, being influenced by human production and life, shows obvious regularity, but a large amount of randomness is mixed into this regularity, which affects the learning and generalization ability of the models.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide a power load prediction optimization method based on continuous power spectrum analysis, in which the significant periodic sequences implied in the original power load time series are extracted through continuous power spectrum analysis and a residual sequence is obtained by separation.
To achieve the above technical purpose and technical effect, the invention is realized through the following technical scheme:
a power load prediction optimization method based on continuous power spectrum analysis comprises
Reading in an original sampling power load time sequence, converting the original sampling power load time sequence into an average power load time sequence according to a forecast interval requirement, and then calculating a distance sequence of the average power load time sequence;
extracting a significant periodic sequence implied in a distance sequence of an average power load time sequence by adopting a continuous power spectrum analysis method, and separating to obtain a residual sequence;
predicting the significant periodic sequences by adopting a BP neural network optimized by a particle swarm algorithm to obtain the prediction result of each significant periodic sequence;
predicting a first-order difference sequence of the residual sequence by using a particle swarm optimization RBF neural network, and then obtaining a prediction result of the residual sequence through a difference inverse operation;
and adding the average value of the average power load time sequence, the prediction result of each significant period sequence and the prediction result of the residual sequence to obtain a final prediction result.
Further, the original sampled power load time series is p = {p(i), i = 1, 2, ..., N}, where N is the number of original power load sampling points;
the average power load time series is p' = {p'(j), j = 1, 2, ..., M}, where M is the number of sampling points of the average power load series after conversion according to the forecast interval requirement, and the mean value of p' is p̄' = (1/M)·Σ_{j=1}^{M} p'(j); let
the distance (departure) sequence of the average power load time series be P = {P(j) = p'(j) - p̄', j = 1, 2, ..., M}.
Further, the significant periodic sequences are {P_1, P_2, ..., P_k, ..., P_K}, where K is the number of significant periodic sequences implied in P and P_k = {P_k(1), P_k(2), ..., P_k(M)}, in which P_k(1), P_k(2), ..., P_k(M) are the values of the significant periodic sequence P_k; the residual sequence is R = P - P_1 - P_2 - ... - P_K.
Further, extracting the significant periodic sequences implied in the distance sequence of the average power load time series by the continuous power spectrum analysis method specifically comprises: analyzing the significant period bands of the distance sequence of the average power load time series by the continuous power spectrum method, and extracting the time series corresponding to each significant period band by the frequency-domain filtering method of the fast Fourier transform, thereby obtaining the significant periodic sequences.
Further, the specific process of predicting the significant periodic sequence by using the BP neural network optimized based on the particle swarm optimization is as follows:
(1) establishing a 3-layer BP neural network model according to the Kolmogorov theorem, setting the number of neurons in the input layer as I, the number of neurons in the hidden layer as H, and the number of neurons in the output layer as O, where H = 2I + 1 and O = 1;
(2) determining the parameters to be optimized, including the number I of input-layer neurons of the BP neural network and the length L of the training set, and further including W = (w(1), w(2), ..., w(q)), q = I*H + H*O + H + O, where w(1)~w(I*H) are the connection weights from the input layer to the hidden-layer neurons of the BP neural network, w(I*H+1)~w(I*H+H*O) are the connection weights from the hidden layer to the output-layer neurons, w(I*H+H*O+1)~w(I*H+H*O+H) are the thresholds of the hidden-layer neurons, and w(I*H+H*O+H+1)~w(I*H+H*O+H+O) are the thresholds of the output-layer neurons;
(3) initializing the population X = (X_1, X_2, ..., X_Q1), where Q1 is the total number of particles, the i-th particle is X_i = (I_i, W_i, L_i) with velocity V_i = (vI_i, vW_i, vL_i), and I_i, W_i, L_i are candidate solutions of the parameters I, W, L;
(4) constructing the input and output matrices of the training set of the BP neural network according to the parameters determined by each particle X_i = (I_i, W_i, L_i) in the population: from the significant periodic sequence P_k and the number I_i of input-layer neurons of the BP neural network, first establish the matrices Z_1 and Z_2 (see the construction sketch after step (8) below), where
Z_1 = [ P_k(1)    P_k(2)      ...  P_k(M-I_i)
        P_k(2)    P_k(3)      ...  P_k(M-I_i+1)
        ...       ...         ...  ...
        P_k(I_i)  P_k(I_i+1)  ...  P_k(M-1) ]   (size I_i × (M-I_i)),
Z_2 = [ P_k(I_i+1)  P_k(I_i+2)  ...  P_k(M) ]   (size 1 × (M-I_i));
with L_i the training-set length of the neural network to be optimized, the last L_i columns of Z_1 form the training-set input matrix I_train and the last L_i columns of Z_2 form the training-set output matrix O_train; taking the forecast step l as the test step, the last l columns of Z_1 form the test-set input matrix I_test and the last l columns of Z_2 form the test-set output matrix O_test; the sum of squared errors of the simulation results, on the test set, of the BP neural network constructed from the training set is taken as the fitness value, and minimizing the fitness value is taken as the optimization direction and evaluation criterion to judge the quality of each particle; the current individual extremum of particle X_i is recorded as P_best(i), and the best P_best(i) in the population is taken as the global extremum G_best;
(5) each particle X_i in the population updates its own velocity and position according to
V_i^(g+1) = ω·V_i^(g) + c_1·r_1·(P_best(i) - X_i^(g)) + c_2·r_2·(G_best - X_i^(g)),
X_i^(g+1) = X_i^(g) + V_i^(g+1),
where ω is the inertia weight, c_1 and c_2 are acceleration factors, g is the current iteration number, and r_1 and r_2 are random numbers uniformly distributed in [0, 1];
(6) recalculating the objective function value of each particle and updating P_best(i) and G_best;
(7) judging whether the maximum number of iterations has been reached; if so, ending the optimization process and obtaining the optimal parameter values (I_best, W_best = (w_best(1), w_best(2), ..., w_best(q)), L_best) found by the particle swarm algorithm; otherwise returning to step (4);
(8) according to I_best, W_best = (w_best(1), w_best(2), ..., w_best(q)) and L_best, constructing the BP neural network training set Z_3 and test set Z_4 and initializing the connection weights and thresholds of the BP neural network, where w_best(1)~w_best(I*H) are the initial values of the connection weights from the input layer to the hidden-layer neurons of the BP neural network, w_best(I*H+1)~w_best(I*H+H*O) are the initial values of the connection weights from the hidden layer to the output-layer neurons, w_best(I*H+H*O+1)~w_best(I*H+H*O+H) are the initial values of the thresholds of the hidden-layer neurons, and w_best(I*H+H*O+H+1)~w_best(I*H+H*O+H+O) are the initial values of the thresholds of the output-layer neurons; the BP neural network model is thus established, and after training, iterative prediction is carried out step by step to obtain the corresponding prediction results.
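A minimal NumPy sketch of the lag-matrix construction and training/test split described in step (4) is given below; the function names and the split convention are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def lag_matrices(series, n_inputs):
    """Build Z1 (n_inputs x (M - n_inputs)) and Z2 (1 x (M - n_inputs)) from a
    component series: column j of Z1 holds n_inputs consecutive values and
    Z2 holds the value to be predicted from them."""
    series = np.asarray(series, dtype=float)
    M = len(series)
    cols = M - n_inputs
    Z1 = np.column_stack([series[j:j + n_inputs] for j in range(cols)])
    Z2 = series[n_inputs:].reshape(1, cols)
    return Z1, Z2

def train_test_split(Z1, Z2, train_len, test_len):
    """Last `train_len` columns -> training set, last `test_len` columns -> test set,
    following the fitness evaluation described in step (4)."""
    I_train, O_train = Z1[:, -train_len:], Z2[:, -train_len:]
    I_test, O_test = Z1[:, -test_len:], Z2[:, -test_len:]
    return (I_train, O_train), (I_test, O_test)
```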
Further, the method for predicting the first difference sequence of the residual sequence by using the particle swarm optimization RBF neural network comprises the following specific steps:
(1) determining the parameters to be optimized, namely the number I of input-layer neurons of the RBF neural network and the length L of the training set;
(2) initializing the population X = (X_1, X_2, ..., X_Q2), where Q2 is the total number of particles, the i-th particle is X_i = (I_i, L_i) with velocity V_i = (vI_i, vL_i), and I_i, L_i are candidate solutions of the parameters I and L;
(3) constructing the input and output matrices of the training set of the RBF neural network according to the parameters determined by each particle X_i = (I_i, L_i) in the population: from the residual sequence R and the number I_i of input-layer neurons of the RBF neural network, first establish the matrices Z_5 and Z_6; with L_i the training-set length of the neural network to be optimized, the last L_i columns of Z_5 form the training-set input matrix I_train and the last L_i columns of Z_6 form the training-set output matrix O_train; taking the forecast step l as the test step, the last l columns of Z_5 form the test-set input matrix I_test and the last l columns of Z_6 form the test-set output matrix O_test; the sum of squared errors of the simulation results, on the test set, of the RBF neural network constructed from the training set is taken as the fitness value, and minimizing the fitness value is taken as the optimization direction and evaluation criterion to judge the quality of each particle; the current individual extremum of particle X_i is recorded as P_best(i), and the best P_best(i) in the population is taken as the global extremum G_best;
(4) each particle X_i in the population updates its own velocity and position according to
V_i^(g+1) = ω·V_i^(g) + c_1·r_1·(P_best(i) - X_i^(g)) + c_2·r_2·(G_best - X_i^(g)),
X_i^(g+1) = X_i^(g) + V_i^(g+1),
where ω is the inertia weight, c_1 and c_2 are acceleration factors, g is the current iteration number, and r_1 and r_2 are random numbers uniformly distributed in [0, 1];
(5) recalculating the objective function value of each particle and updating P_best(i) and G_best;
(6) judging whether the maximum number of iterations has been reached; if so, ending the optimization process and obtaining the optimal parameter values (I_best, L_best) found by the particle swarm algorithm; otherwise returning to step (3);
(7) according to I_best and L_best, constructing the RBF neural network training set Z_7 and test set Z_8, establishing and training the RBF neural network model, performing iterative prediction step by step, and obtaining the corresponding prediction result.
Further, the inertia weight ω = 0.5 and the acceleration factors c_1 = c_2 = 1.49445.
The invention has the beneficial effects that:
(1) The significant periodic sequences of the power load extracted by continuous power spectrum analysis are strongly regular and can therefore be predicted with high accuracy, and they account for a large proportion of the original power load sequence, which lays the foundation for high-accuracy prediction; the residual sequence, from which the periodic signals have been removed, accounts for only a small proportion of the whole power load sequence and becomes stationary after the one-step difference operation applied during processing, so its prediction error is relatively limited. Therefore, decomposing the power load sequence into several significant periodic sequences and a single residual sequence by continuous power spectrum analysis, and then predicting each significant periodic sequence and the residual sequence separately, can greatly improve the overall prediction effect.
(2) To address the influence of neural network structure selection on prediction performance, the method adopts a BP neural network and an RBF neural network respectively, according to the characteristics of the significant periodic sequences and of the residual sequence separated from the power load sequence, and uses the particle swarm algorithm to optimize the structural parameters of the neural networks and the size of the training set; this significantly improves the generalization performance of the neural networks and ultimately improves the prediction accuracy.
Drawings
FIG. 1 is a flow chart of a method for optimizing power load prediction based on continuous power spectrum analysis according to the present invention;
FIG. 2 is a raw power load sequence chart;
FIG. 3 is a graph of the results of the continuous power spectrum analysis of the distance sequence of the average power load time series;
FIG. 4 is a graph of the significant periodic sequences extracted from the distance sequence of the average power load time series and the separated residual sequence;
FIG. 5(a) is a graph of the one-step prediction results of the method of the present invention;
FIG. 5(b) is a graph of the two-step prediction results of the method of the present invention;
FIG. 5(c) is a graph of the three-step prediction results of the method of the present invention;
FIG. 6(a) is a graph of the one-step prediction results of a particle-swarm-optimized RBF neural network established for the original power load sequence;
FIG. 6(b) is a graph of the two-step prediction results of a particle-swarm-optimized RBF neural network established for the original power load sequence;
FIG. 6(c) is a graph of the three-step prediction results of a particle-swarm-optimized RBF neural network established for the original power load sequence;
FIG. 7(a) is a diagram of one-step prediction results of an ARIMA time series model established for an original power load sequence;
FIG. 7(b) is a diagram of two-step prediction results of an ARIMA time series model established for an original power load sequence;
fig. 7(c) is a diagram of the three-step prediction results of the ARIMA time series model established for the original power load sequence.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
The invention discloses a power load prediction optimization method based on continuous power spectrum analysis. A continuous power spectrum analysis method is used to extract the significant periodic sequences implied in the power load time series and to separate out a residual sequence. The significant periodic sequences are predicted with a BP neural network optimized by the particle swarm algorithm, giving a prediction result for each significant periodic sequence; the first-order difference sequence of the residual sequence is predicted with an RBF neural network optimized by the particle swarm algorithm, and the prediction result of the residual sequence is obtained by the inverse difference operation. Finally, the mean value of the average power load time series, the prediction results of the significant periodic sequences and the prediction result of the residual sequence are added to obtain the final prediction result.
As shown in fig. 1, specifically, the method comprises the following steps:
s1, reading in the original sampling power load time sequence, converting the time sequence into an average power load time sequence according to the forecast interval requirement, and then calculating the distance sequence of the average power load time sequence;
the original sampled power load time series is p = {p(i), i = 1, 2, ..., N}, where N is the number of original power load sampling points;
the average power load time series is p' = {p'(j), j = 1, 2, ..., M}, where M is the number of sampling points of the average power load series after conversion according to the forecast interval requirement, and the mean value of p' is p̄' = (1/M)·Σ_{j=1}^{M} p'(j); let
the distance sequence of the average power load time series be P = {P(j) = p'(j) - p̄', j = 1, 2, ..., M}.
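As a concrete illustration of step S1, the following minimal NumPy sketch converts the raw samples into an average-load series for a given forecast interval and forms the distance sequence; the function and variable names, and the number of raw samples per forecast interval, are illustrative assumptions.

```python
import numpy as np

def distance_sequence(p_raw, samples_per_interval):
    """Average the raw load samples over each forecast interval, then subtract the
    overall mean of the averaged series to obtain the distance sequence P."""
    p_raw = np.asarray(p_raw, dtype=float)
    M = len(p_raw) // samples_per_interval                 # points of the averaged series
    p_avg = p_raw[:M * samples_per_interval].reshape(M, -1).mean(axis=1)
    p_mean = p_avg.mean()                                   # mean of the average load series
    P = p_avg - p_mean                                      # distance (departure) sequence
    return p_avg, p_mean, P
```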
S2, extracting a significant periodic sequence implied in a distance sequence of an average power load time sequence by adopting a continuous power spectrum analysis method, and separating to obtain a residual sequence;
the significant periodic sequences are {P_1, P_2, ..., P_k, ..., P_K}, where K is the number of significant periodic sequences implied in P and P_k = {P_k(1), P_k(2), ..., P_k(M)}, in which P_k(1), P_k(2), ..., P_k(M) are the values of the significant periodic sequence P_k; the residual sequence is R = P - P_1 - P_2 - ... - P_K; thus P = P_1 + P_2 + ... + P_K + R.
The above extraction process is specifically as follows:
Assume a discrete time series x_t, t = 0, 1, ..., N-1, with N sampling points and sampling interval Δt. A continuous power spectrum estimation method is applied to analyze the significant period bands of the discrete time series, and the time series corresponding to each significant period band is extracted by a frequency-domain filtering method based on the fast Fourier transform (FFT). The specific steps are as follows, with a compact numerical sketch given after step (4):
(1) determining continuous power spectral values
First the rough estimate of the continuous power spectrum of x_t is calculated:
Ŝ_h = (1/m)·[ r(0) + 2·Σ_{τ=1}^{m-1} r(τ)·cos(πhτ/m) + r(m)·cos(hπ) ]    (1)
where Ŝ_h is the rough continuous power spectrum estimate corresponding to wave number h, h = 0, 1, ..., m, m = N/8, and r(τ) is the autocorrelation coefficient of the time series x_t at lag τ:
r(τ) = [ Σ_{t=1}^{N-τ} (x_t - x̄)(x_{t+τ} - x̄) / (N-τ) ] / s²    (2)
where x̄ and s are the mean and standard deviation of the discrete time series x_t, respectively.
To eliminate small fluctuations in the rough spectrum estimate, Hanning smoothing is applied to equation (1), after which the continuous power spectrum values (the solid line in FIG. 3) are:
S_0 = 0.5·Ŝ_0 + 0.5·Ŝ_1,  S_h = 0.25·Ŝ_{h-1} + 0.5·Ŝ_h + 0.25·Ŝ_{h+1} (h = 1, ..., m-1),  S_m = 0.5·Ŝ_{m-1} + 0.5·Ŝ_m    (3)
where S_0 is the continuous power spectrum value corresponding to wave number 0, S_h the value corresponding to wave number h, and S_m the value corresponding to wave number m.
(2) Determining an analysis period
The period corresponding to wave number h is T_h = 2m/h (the period points on the abscissa in FIG. 3); since m = N/8 in the embodiment of the invention, T_h = N/(4h).
(3) Continuous power spectrum confidence test
The continuous power spectrum values obtained from equation (3) are compared with the red noise spectrum to judge their significance.
Assuming that the continuous power spectrum obtained from equation (3) comes from an aperiodic process, the ratio of the continuous power spectrum value S_h at wave number h to the corresponding red noise spectrum value S'_{0h} follows a χ² distribution divided by its degrees of freedom v:
S_h / S'_{0h} ~ χ²_v / v,
where the red noise spectrum value is
S'_{0h} = S̄ · (1 - r(1)²) / (1 + r(1)² - 2·r(1)·cos(πh/m)),
in which S̄ is the mean of the continuous power spectrum values over all wave numbers calculated from equation (3), r(1) is the autocorrelation coefficient of x_t at lag 1, and the degrees of freedom are v = (2N - m/2)/m.
In the embodiment of the invention a 0.05 significance level is adopted: when S_h exceeds the test spectrum S'_{0h}·χ²_{0.05}(v)/v, the periodic fluctuation at wave number h is considered significant; this test spectrum is the dashed check line in FIG. 3.
(4) Extracting time series corresponding to periodic bands
Determination of the period band: on each side (left and right) of a significant continuous power spectrum value selected in step (3), the first period point falling below the red-noise test line is taken, and the interval between them forms the significant period band; the point below the test line on the left in FIG. 3 is defined as the upper bound of the period band, and the point below the test line on the right in FIG. 3 as the lower bound.
Extracting the time series corresponding to a period band: the embodiment of the invention adopts the geoscience data processing program library WHIGG F90LIB (WFL) developed by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences, and extracts the time series corresponding to a period band with the FFT frequency-domain filtering subroutine of this library, which is called as:
CALL FFT_FILTER(N,X,DT,PER1,PER2,FIL_METHOD,XOUT)
where N is the total number of sampling points, X is x_t, DT is the sampling time interval Δt, PER1 is the lower bound of the period band to be extracted, PER2 is the upper bound of the period band to be extracted, FIL_METHOD is the filter type (here "BAND" is taken, i.e. band-pass filtering over the period band), and XOUT is the extracted time series corresponding to the significant period band.
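A compact Python sketch of this analysis chain, assuming the rough spectrum, Hanning smoothing and red-noise test written out above; fft_filter is an assumed NumPy stand-in for the WFL FFT_FILTER subroutine, not that library routine itself, and all names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def continuous_power_spectrum(x, alpha=0.05):
    """Continuous power spectrum of a 1-D series with a red-noise significance test.
    Returns periods T_h, smoothed spectrum S_h and the red-noise test spectrum."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    m = N // 8                                   # maximum lag, m = N/8
    xm, s = x.mean(), x.std()
    r = np.array([np.sum((x[:N - tau] - xm) * (x[tau:] - xm)) / ((N - tau) * s**2)
                  for tau in range(m + 1)])      # autocorrelation r(0..m)
    h = np.arange(m + 1)
    tau = np.arange(1, m)
    # rough spectrum estimate (eq. 1)
    S_raw = (r[0] + 2 * np.cos(np.pi * np.outer(h, tau) / m) @ r[1:m]
             + r[m] * np.cos(h * np.pi)) / m
    # Hanning smoothing (eq. 3)
    S = S_raw.copy()
    S[1:m] = 0.25 * S_raw[:m - 1] + 0.5 * S_raw[1:m] + 0.25 * S_raw[2:]
    S[0] = 0.5 * (S_raw[0] + S_raw[1])
    S[m] = 0.5 * (S_raw[m - 1] + S_raw[m])
    # red-noise spectrum and chi-square test line at the chosen significance level
    S_red = S.mean() * (1 - r[1]**2) / (1 + r[1]**2 - 2 * r[1] * np.cos(np.pi * h / m))
    dof = (2 * N - m / 2) / m
    S_test = S_red * chi2.ppf(1 - alpha, dof) / dof
    T = np.full(m + 1, np.inf)
    T[1:] = 2 * m / h[1:]                        # period corresponding to wave number h
    return T, S, S_test

def fft_filter(x, dt, per1, per2):
    """Band-pass the series in the frequency domain, keeping the components whose
    period (in units of dt) lies in [per1, per2]; an assumed analogue of FFT_FILTER."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(N, d=dt)
    with np.errstate(divide="ignore"):
        periods = np.where(freqs > 0, 1.0 / freqs, np.inf)
    keep = (periods >= per1) & (periods <= per2)
    return np.fft.irfft(spec * keep, n=N)
```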
S3, adopting a BP neural network based on particle swarm optimization to predict the significant periodic sequence, wherein the specific process is as follows;
(1) According to the Kolmogorov theorem, a 3-layer BP neural network can approximate any nonlinear function, so a 3-layer BP neural network model is established in the embodiment of the invention, with the number of input-layer neurons set as I, the number of hidden-layer neurons as H, and the number of output-layer neurons as O, where H = 2I + 1 and O = 1;
(2) determining the parameters to be optimized, including the number I of input-layer neurons of the BP neural network and the length L of the training set, and further including W = (w(1), w(2), ..., w(q)), q = I*H + H*O + H + O, where w(1)~w(I*H) are the connection weights from the input layer to the hidden-layer neurons of the BP neural network, w(I*H+1)~w(I*H+H*O) are the connection weights from the hidden layer to the output-layer neurons, w(I*H+H*O+1)~w(I*H+H*O+H) are the thresholds of the hidden-layer neurons, and w(I*H+H*O+H+1)~w(I*H+H*O+H+O) are the thresholds of the output-layer neurons;
(3) initializing the population X = (X_1, X_2, ..., X_Q1), where Q1 is the total number of particles, the i-th particle is X_i = (I_i, W_i, L_i) with velocity V_i = (vI_i, vW_i, vL_i), and I_i, W_i, L_i are candidate solutions of the parameters I, W, L;
(4) constructing the input and output matrices of the training set of the BP neural network according to the parameters determined by each particle X_i = (I_i, W_i, L_i) in the population: from the significant periodic sequence P_k and the number I_i of input-layer neurons, first establish the matrices Z_1 and Z_2 as defined above; with L_i the training-set length of the neural network to be optimized, the last L_i columns of Z_1 form the training-set input matrix I_train and the last L_i columns of Z_2 form the training-set output matrix O_train; taking the forecast step l as the test step, the last l columns of Z_1 form the test-set input matrix I_test and the last l columns of Z_2 form the test-set output matrix O_test; the sum of squared errors of the simulation results, on the test set, of the BP neural network constructed from the training set is taken as the fitness value, and minimizing the fitness value is taken as the optimization direction and evaluation criterion to judge the quality of each particle; the current individual extremum of particle X_i is recorded as P_best(i), and the best P_best(i) in the population is taken as the global extremum G_best;
(5) each particle X_i in the population updates its own velocity and position according to
V_i^(g+1) = ω·V_i^(g) + c_1·r_1·(P_best(i) - X_i^(g)) + c_2·r_2·(G_best - X_i^(g)),
X_i^(g+1) = X_i^(g) + V_i^(g+1),
where ω is the inertia weight, c_1 and c_2 are acceleration factors, g is the current iteration number, and r_1 and r_2 are random numbers uniformly distributed in [0, 1] (a generic sketch of this update loop is given after step (8) below);
(6) recalculating the objective function value of each particle and updating P_best(i) and G_best;
(7) judging whether the maximum number of iterations has been reached; if so, ending the optimization process and obtaining the optimal parameter values (I_best, W_best = (w_best(1), w_best(2), ..., w_best(q)), L_best) found by the particle swarm algorithm; otherwise returning to step (4);
(8) according to I_best, W_best = (w_best(1), w_best(2), ..., w_best(q)) and L_best, constructing the BP neural network training set Z_3 and test set Z_4 and initializing the connection weights and thresholds of the BP neural network, where w_best(1)~w_best(I*H) are the initial values of the connection weights from the input layer to the hidden-layer neurons of the BP neural network, w_best(I*H+1)~w_best(I*H+H*O) are the initial values of the connection weights from the hidden layer to the output-layer neurons, w_best(I*H+H*O+1)~w_best(I*H+H*O+H) are the initial values of the thresholds of the hidden-layer neurons, and w_best(I*H+H*O+H+1)~w_best(I*H+H*O+H+O) are the initial values of the thresholds of the output-layer neurons; the BP neural network model is thus established, and after training, iterative prediction is carried out step by step to obtain the corresponding prediction results.
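The particle swarm loop of steps (3)-(7) can be sketched generically as follows; the fitness callable (the sum of squared test-set errors of a network built from the candidate parameters) and all names are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def pso_minimize(fitness, lower, upper, n_particles=50, n_iter=30,
                 omega=0.5, c1=1.49445, c2=1.49445, seed=0):
    """Minimize `fitness(x)` over the box [lower, upper] using the velocity and
    position updates of steps (5)-(6); returns the best parameter vector found."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    X = rng.uniform(lower, upper, size=(n_particles, dim))      # positions
    V = np.zeros_like(X)                                        # velocities
    pbest, pbest_val = X.copy(), np.array([fitness(x) for x in X])
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = omega * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lower, upper)
        vals = np.array([fitness(x) for x in X])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val
```

In the method above, the candidate vector would encode (I_i, W_i, L_i) for the BP network or (I_i, L_i) for the RBF network, with I and L rounded to integers inside the fitness evaluation and with ω = 0.5, c_1 = c_2 = 1.49445 as stated above.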
S4, predicting the first-order difference sequence of the residual sequence by adopting a particle swarm optimization RBF neural network, and the specific process is as follows:
(1) determining the parameters to be optimized, namely the number I of input-layer neurons of the RBF neural network and the length L of the training set;
(2) initializing the population X = (X_1, X_2, ..., X_Q2), where Q2 is the total number of particles, the i-th particle is X_i = (I_i, L_i) with velocity V_i = (vI_i, vL_i), and I_i, L_i are candidate solutions of the parameters I and L;
(3) constructing the input and output matrices of the training set of the RBF neural network according to the parameters determined by each particle X_i = (I_i, L_i) in the population: from the residual sequence R and the number I_i of input-layer neurons of the RBF neural network, first establish the matrices Z_5 and Z_6; with L_i the training-set length of the neural network to be optimized, the last L_i columns of Z_5 form the training-set input matrix I_train and the last L_i columns of Z_6 form the training-set output matrix O_train; taking the forecast step l as the test step, the last l columns of Z_5 form the test-set input matrix I_test and the last l columns of Z_6 form the test-set output matrix O_test; the sum of squared errors of the simulation results, on the test set, of the RBF neural network constructed from the training set is taken as the fitness value, and minimizing the fitness value is taken as the optimization direction and evaluation criterion to judge the quality of each particle; the current individual extremum of particle X_i is recorded as P_best(i), and the best P_best(i) in the population is taken as the global extremum G_best;
(4) each particle X_i in the population updates its own velocity and position according to
V_i^(g+1) = ω·V_i^(g) + c_1·r_1·(P_best(i) - X_i^(g)) + c_2·r_2·(G_best - X_i^(g)),
X_i^(g+1) = X_i^(g) + V_i^(g+1),
where ω is the inertia weight, c_1 and c_2 are acceleration factors, g is the current iteration number, and r_1 and r_2 are random numbers uniformly distributed in [0, 1];
(5) recalculating the objective function value of each particle and updating P_best(i) and G_best;
(6) judging whether the maximum number of iterations has been reached; if so, ending the optimization process and obtaining the optimal parameter values (I_best, L_best) found by the particle swarm algorithm; otherwise returning to step (3);
(7) according to I_best and L_best, constructing the RBF neural network training set Z_7 and test set Z_8, establishing and training the RBF neural network model, performing iterative prediction step by step, and obtaining the corresponding prediction result.
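A minimal sketch of the residual branch of S4 is given below, assuming a generic one-step regressor with a scikit-learn-style predict method in place of the optimized RBF network; only the first-order differencing and the inverse (cumulative) reconstruction are shown, and all names are illustrative.

```python
import numpy as np

def predict_residual(R, model, n_inputs, steps):
    """Difference the residual sequence once, predict `steps` values of the
    difference iteratively with a one-step regressor, then undo the differencing."""
    R = np.asarray(R, dtype=float)
    d = np.diff(R)                                   # first-order difference sequence
    window = list(d[-n_inputs:])                     # most recent known inputs
    d_pred = []
    for _ in range(steps):
        x = np.array(window).reshape(1, -1)
        nxt = float(np.asarray(model.predict(x)).ravel()[0])   # assumed regressor API
        d_pred.append(nxt)
        window = window[1:] + [nxt]                  # slide the input window forward
    # inverse difference: cumulative sum anchored at the last observed residual value
    return R[-1] + np.cumsum(d_pred)
```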
And S5, adding the average value of the average power load time series, the prediction result of each significant period series and the prediction result of the residual series to obtain a final prediction result.
Example two
According to steps S1-S5 of the first embodiment, an original power load time series collected by a power grid at the hourly level is taken, as shown in FIG. 2. Since the purpose of this embodiment is short-term hourly forecasting, the original power load data can be used directly without any adjustment, i.e. p'(i) = p(i), i = 1, 2, ..., N. In this embodiment the first 1680 points of p'(i) are taken as training data, the following 50 points are predicted, and the effectiveness of the algorithm is examined with the relative percentage error MAPE as the index, that is:
MAPE = (1/l)·Σ_{i=1}^{l} |y(i) - p'(i)| / p'(i)
where y(i) and p'(i) are respectively the predicted value and the sampled value of the power load, and l is the prediction length.
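A small sketch, under the same illustrative naming assumptions as the snippets above, of the final recombination of S5 and of the MAPE index:

```python
import numpy as np

def combine_and_score(p_mean, periodic_preds, residual_pred, actual):
    """Final forecast = mean of the average load series + predicted significant
    periodic components + predicted residual; scored with the MAPE index above."""
    y = p_mean + np.sum(periodic_preds, axis=0) + residual_pred
    mape = np.mean(np.abs(y - actual) / actual)
    return y, mape
```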
FIG. 3 shows the continuous power spectrum analysis result of the distance sequence P of the average power load time series. It is found that the grid power load sequence has 2 significant period bands, with extreme points at 12 and 24 hours; on each side of an extreme point the first period points falling below the test line (the dashed line in FIG. 3) are taken to form the period band. In this embodiment the 2 significant period bands are [21.8, 26.7] and [11.4, 12.6] respectively. The time series corresponding to these 2 period bands, P_1 and P_2, are extracted by the FFT frequency-domain filtering method, and the corresponding residual sequence R is obtained, so that P = P_1 + P_2 + R, see FIG. 4. The regularity of the 2 significant periodic sequences is extremely strong, so high-accuracy prediction can be achieved; on the other hand, although the prediction error of the residual is unavoidable, it is calculated that the energy (variance) of the residual R accounts for only 28.56% of the energy (variance) of P, a significant reduction, and therefore the prediction error caused by the residual is much smaller than the error of predicting P directly.
Although neural networks have strong nonlinear fitting ability and fast learning ability, choosing a suitable neural network model and determining its structure, training set and test set still depends mainly on manual experience or trial and error, and the universality is poor. Analysis of the extracted significant periodic sequences shows that they have obvious periodic variation characteristics and are smooth, but their amplitude and phase change slightly with time, so they are more suitable for the BP neural network with its strong fault tolerance; the residual sequence fluctuates around the 0 axis after first-order differencing and is more suitable for the RBF neural network. Therefore, P_1 and P_2 are modelled with the BP neural network optimized by the particle swarm algorithm, and the residual sequence R with the RBF neural network optimized by the particle swarm algorithm.
For P_1 and P_2, the BP neural network model optimized by the particle swarm algorithm is adopted, with the range of the number of input-layer neurons taken as [5, 14], the range of the training-set length as [50, 1650], the range of the neural network weights and thresholds as [-3, 3], a particle swarm size of 50 and 30 iterations. For R, the RBF neural network optimized by the particle swarm algorithm is adopted, with the range of the number of input-layer neurons taken as [5, 20], the range of the training-set length as [50, 1650], a particle swarm size of 50 and 30 iterations. Table 1 gives the optimization results, in 3-step prediction, for the two parameters (the number I of input-layer neurons and the training-set length L) of the significant periodic sequences P_1, P_2 and the residual R; the optimization results for the weights and thresholds of the BP neural networks established for P_1 and P_2 are not listed one by one because there are too many parameters.
TABLE 1
In this example, 1-step, 2-step, and 3-step prediction experiments were performed with a total prediction step size of 50, and the prediction results are shown in fig. 5(a) - (c), and table 2 shows the statistics of prediction errors. As can be seen, the overall prediction precision is reduced with the increase of the prediction step length, but the overall error is less than 5%, and the prediction result is satisfactory.
TABLE 2
|      | 1-step prediction | 2-step prediction | 3-step prediction |
| MAPE | 0.0399            | 0.0436            | 0.0434            |
Comparative experiment 1
In order to verify the influence of the optimization strategy provided by the invention on the experimental results, in comparative experiment 1 a difference operation is performed directly on the original power load sequence p', and then a particle-swarm-optimized RBF neural network is established, with the range of the number of input-layer neurons taken as [5, 25], the training-set length as [50, 1650], a particle swarm size of 50 and 30 iterations. Table 3 shows the parameter optimization results of the RBF neural network established for the original power load sequence p' in 3-step prediction.
TABLE 3
Similarly, in comparative experiment 1, 1-step, 2-step and 3-step prediction experiments with a total prediction length of 50 were performed; the prediction results are shown in FIGS. 6(a)-(c) and the prediction error statistics are given in Table 4. Comparison with Table 2 shows that the average 1-3-step prediction error increases by 60.44%.
TABLE 4
|      | 1-step prediction | 2-step prediction | 3-step prediction |
| MAPE | 0.0395            | 0.0708            | 0.0933            |
If the difference operation is not performed on p' and the number I of input-layer neurons of the RBF neural network and the training-set length L are selected at random, the final prediction errors differ greatly; the embodiment of the invention selects two different groups of I and L to illustrate their influence on the final prediction error, as shown in Table 5.
TABLE 5
The average 1-3-step prediction errors of the comparative experiments with the two different parameter groups increase by 47.68% and 170.61% respectively compared with Table 2. The poor results of this group of comparative experiments show that the selection of the neural network parameters has a great influence on the learning ability and generalization of the neural network, so modelling directly with a neural network does not work well.
Comparative experiment 2
An autoregressive integrated moving average (ARIMA) model is established for the original power load sequence. The 100 sampled data points before the prediction point are selected, and the structure of the ARIMA model is determined by the AIC order-selection criterion. Similarly, in comparative experiment 2, 1-step, 2-step and 3-step prediction experiments with a total prediction length of 50 were performed; the prediction results are shown in FIGS. 7(a)-(c) and the prediction error statistics are given in Table 6. Comparison with Table 2 shows that the average 1-3-step prediction error increases by 136.25%.
TABLE 6
|      | 1-step prediction | 2-step prediction | 3-step prediction |
| MAPE | 0.0169            | 0.1486            | 0.1343            |
In summary:
The significant periodic sequences of the power load extracted by continuous power spectrum analysis have strong regularity and can therefore be predicted with high accuracy, and they account for a large proportion of the original sequence, which lays the foundation for high-accuracy prediction; the residual sequence remaining after the significant periodic sequences are removed accounts for only a small proportion of the original sequence, so its prediction error is relatively limited. Decomposing the power load sequence into several significant periodic sequences and a single residual sequence by continuous power spectrum analysis, and then predicting each significant periodic sequence and the residual sequence separately, can greatly improve the overall prediction effect.
The foregoing shows and describes the general principles, main features and advantages of the present invention. Those skilled in the art will understand that the present invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the present invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.
Claims (7)
1. A power load prediction optimization method based on continuous power spectrum analysis, characterized by comprising the following steps:
Reading in an original sampling power load time sequence, converting the original sampling power load time sequence into an average power load time sequence according to a forecast interval requirement, and then calculating a distance sequence of the average power load time sequence;
extracting a significant periodic sequence implied in a distance sequence of an average power load time sequence by adopting a continuous power spectrum analysis method, and separating to obtain a residual sequence;
predicting the significant periodic sequences by adopting a BP neural network optimized by a particle swarm algorithm to obtain the prediction result of each significant periodic sequence;
predicting a first-order difference sequence of the residual sequence by using a particle swarm optimization RBF neural network, and then obtaining a prediction result of the residual sequence through a difference inverse operation;
and adding the average value of the average power load time sequence, the prediction result of each significant period sequence and the prediction result of the residual sequence to obtain a final prediction result.
2. The power load prediction optimization method based on continuous power spectrum analysis according to claim 1, characterized in that:
the original sampled power load time series is p = {p(i), i = 1, 2, ..., N}, where N is the number of original power load sampling points;
the average power load time series is p' = {p'(j), j = 1, 2, ..., M}, where M is the number of sampling points of the average power load series after conversion according to the forecast interval requirement, and the mean value of p' is p̄' = (1/M)·Σ_{j=1}^{M} p'(j); let
the distance sequence of the average power load time series be P = {P(j) = p'(j) - p̄', j = 1, 2, ..., M}.
3. The power load prediction optimization method based on continuous power spectrum analysis according to claim 2, characterized in that:
the significant periodic sequences are {P_1, P_2, ..., P_k, ..., P_K}, where K is the number of significant periodic sequences implied in P and P_k = {P_k(1), P_k(2), ..., P_k(M)}, in which P_k(1), P_k(2), ..., P_k(M) are the values of the significant periodic sequence P_k;
the residual sequence is R = P - P_1 - P_2 - ... - P_K.
4. The power load prediction optimization method based on continuous power spectrum analysis according to any one of claims 1-3, characterized in that extracting the significant periodic sequences implied in the distance sequence of the average power load time series by the continuous power spectrum analysis method specifically comprises: analyzing the significant period bands of the distance sequence of the average power load time series by the continuous power spectrum method, and extracting the time series corresponding to each significant period band by the frequency-domain filtering method of the fast Fourier transform, thereby obtaining the significant periodic sequences.
5. The method according to claim 3, wherein the method comprises the following steps: the specific process of predicting the significant periodic sequence by adopting the BP neural network optimized based on the particle swarm optimization is as follows:
(1) establishing a 3-layer BP neural network model according to Kolmogorov theorem, and setting the number of neurons in an input layer as I, the number of neurons in a hidden layer as H and the number of neurons in an output layer as O; wherein, H2I +1, O1;
(2) determining parameters needing optimization, including: the number I of neurons in the input layer of the BP neural network and the length L of the training set further comprise: w (1), W (2),. once, W (q)), q ═ H + O, where W (1) -W (I × H) are the connection weights from the input layer to the hidden layer neurons of the BP neural network, W (I × H +1) -W (I × H + O) are the connection weights from the hidden layer to the output layer neurons of the BP neural network, W (I × H + O + H) are the thresholds from the hidden layer to the output layer neurons of the BP neural network, and W (I × H + O + H +1) -W (I × H + O) are the thresholds from the hidden layer neurons of the BP neural network;
(3) initializing a populationWherein Q1Is the total number of particles, the ith particle is Xi=(Ii,Wi,Li) The particle velocity isWherein Ii、Wi、LiA set of alternative solutions for parameter I, W, L;
(4) according to each particle X in the populationi=(Ii,Wi,Li) Constructing input and output matrices of a training set of the BP neural network for the determined parameters, wherein the significant period sequence P iskAnd BP neural network input layer neuron number IiFirst, a matrix Z is established1And Z2Wherein:
<mrow> <msub> <mi>Z</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mn>...</mn> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mrow> <mi>M</mi> <mo>-</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mn>...</mn> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mrow> <mi>M</mi> <mo>-</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>+</mo> <mn>1</mn> </mrow> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>...</mn> </mtd> <mtd> <mn>...</mn> </mtd> <mtd> <mn>...</mn> </mtd> <mtd> <mn>...</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>+</mo> <mn>1</mn> </mrow> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mn>...</mn> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mrow> <mi>M</mi> <mo>-</mo> <mn>1</mn> </mrow> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> </mtable> </mfenced> <mrow> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>*</mo> <mrow> <mo>(</mo> <mrow> <mi>M</mi> <mo>-</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </msub> </mrow>
<mrow> <msub> <mi>Z</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> <mo>+</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mn>...</mn> </mtd> <mtd> <mrow> <msub> <mi>P</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mi>M</mi> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> </mtable> </mfenced> <mrow> <mi>M</mi> <mo>-</mo> <msub> <mi>I</mi> <mi>i</mi> </msub> </mrow> </msub> </mrow>
training set length L, Z for neural network to be optimized1Middle last LiInput matrix I with columns as training settrain,Z2Middle last LiOutput matrix O with columns as training settrain(ii) a Taking the forecast step length l as the test step length, Z1The last column in the test set is used as an input matrix I of the test settest,Z2The last column in the test set is used as the output matrix O of the test settest(ii) a Taking the sum of the squares of errors of the simulation results of the BP neural network constructed according to the training set as the fitness value of the test set, taking the minimum fitness value as the optimization direction as the evaluation standard to judge the advantages and disadvantages of each particle, and recording the particle XiCurrent individual extremum is Pbest(i) Taking P in the populationbest(i) Most preferably oneVolume as a whole extreme Gbest;
(5) updating the velocity and position of each particle X_i in the population respectively:
$$V_i^{g+1} = \omega V_i^{g} + c_1 r_1 \left(P_{best}(i) - X_i^{g}\right) + c_2 r_2 \left(G_{best} - X_i^{g}\right),$$

$$X_i^{g+1} = X_i^{g} + V_i^{g+1}$$
in the formulas: ω is the inertia weight, c_1 and c_2 are acceleration factors, g is the current iteration number, and r_1 and r_2 are random numbers distributed in [0, 1];
(6) recalculating the objective function value of each particle at this point, and updating P_best(i) and G_best;
(7) judging whether the maximum number of iterations has been reached; if so, ending the optimization process and obtaining the optimal parameter values (I_best, W_best(w_best(1), w_best(2), ..., w_best(q)), L_best) found by the particle swarm optimization; otherwise, returning to step (4) (a compact sketch of this optimization loop is given below);
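As a compact, non-authoritative sketch of steps (4)–(7), the loop below minimises a user-supplied `fitness` function (assumed to return the test-set sum of squared errors for a BP network built from a particle's parameters) using the velocity and position updates above; the bounds `lo`/`hi`, the particle count and the iteration count are hypothetical, while the default ω, c_1, c_2 match the values stated in claim 7.

```python
import numpy as np

def pso_optimize(fitness, dim, lo, hi, n_particles=30, n_iter=100,
                 omega=0.5, c1=1.49445, c2=1.49445, rng=None):
    """Minimise `fitness` with a basic particle swarm:
    V <- w*V + c1*r1*(Pbest - X) + c2*r2*(Gbest - X), then X <- X + V."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.uniform(lo, hi, size=(n_particles, dim))    # particle positions
    V = np.zeros_like(X)                                # particle velocities
    P_best = X.copy()                                   # individual extrema
    P_val = np.array([fitness(x) for x in X])
    G_best = P_best[P_val.argmin()].copy()              # global extremum
    for g in range(n_iter):
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        V = omega * V + c1 * r1 * (P_best - X) + c2 * r2 * (G_best - X)
        X = np.clip(X + V, lo, hi)
        vals = np.array([fitness(x) for x in X])
        better = vals < P_val
        P_best[better] = X[better]
        P_val[better] = vals[better]
        G_best = P_best[P_val.argmin()].copy()
    return G_best
```

Returning only G_best mirrors step (7): the best position found corresponds to (I_best, W_best, L_best) once decoded from the particle encoding.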
(8) according to I_best, W_best(w_best(1), w_best(2), ..., w_best(q)) and L_best, constructing the BP neural network training set Z_3 and test set Z_4 and initializing the BP neural network connection weights and thresholds, wherein:
$$Z_3 = \begin{bmatrix} P_k(M-L_{best}-I_{best}+1) & P_k(M-L_{best}-I_{best}+2) & \cdots & P_k(M-I_{best}) \\ P_k(M-L_{best}-I_{best}+2) & P_k(M-L_{best}-I_{best}+3) & \cdots & P_k(M-I_{best}+1) \\ \vdots & \vdots & \ddots & \vdots \\ P_k(M-L_{best}) & P_k(M-L_{best}+1) & \cdots & P_k(M-1) \end{bmatrix}_{I_{best} \times L_{best}},$$

$$Z_4 = \begin{bmatrix} P_k(M-L_{best}+1) & P_k(M-L_{best}+2) & \cdots & P_k(M) \end{bmatrix}_{1 \times L_{best}}$$
w_best(1)~w_best(I*H) are the initial values of the connection weights from the input-layer to the hidden-layer neurons of the BP neural network, w_best(I*H+1)~w_best(I*H+H*O) are the initial values of the connection weights from the hidden-layer to the output-layer neurons, w_best(I*H+H*O+1)~w_best(I*H+H*O+H) are the initial values of the hidden-layer neuron thresholds, and w_best(I*H+H*O+H+1)~w_best(I*H+H*O+H+O) are the initial values of the output-layer neuron thresholds; the BP neural network model is thereby established, and after training, single-step iterative prediction is carried out to obtain the corresponding prediction result.
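Purely as an illustration of the index ranges just listed, the sketch below unpacks a flat parameter vector w_best into the initial weight matrices and thresholds of the BP network; the function name and the NumPy layout are assumptions, not part of the claim.

```python
import numpy as np

def unpack_weights(w_best, I, H, O):
    """Split the flat vector w_best into the BP network's initial parameters,
    following the index ranges in the claim (1-based there, 0-based here):
      w_best[0 : I*H]                      input->hidden weights  (I x H)
      w_best[I*H : I*H+H*O]                hidden->output weights (H x O)
      w_best[I*H+H*O : I*H+H*O+H]          hidden-layer thresholds (H,)
      w_best[I*H+H*O+H : I*H+H*O+H+O]      output-layer thresholds (O,)
    """
    w = np.asarray(w_best, dtype=float)
    a, b, c = I * H, I * H + H * O, I * H + H * O + H
    W_ih = w[:a].reshape(I, H)
    W_ho = w[a:b].reshape(H, O)
    b_h = w[b:c]
    b_o = w[c:c + O]
    return W_ih, W_ho, b_h, b_o
```

With I inputs, H hidden neurons and O outputs, w_best must therefore have length q = I*H + H*O + H + O.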
6. The method according to claim 3, wherein the method comprises the following steps: the first-order difference sequence of the residual sequence is predicted with an RBF neural network optimized by particle swarm optimization, the specific process being:
(1) determining the parameters to be optimized, including the number I of neurons in the input layer of the RBF neural network and the training set length L;
(2) initializing a population X_1, X_2, ..., X_{Q_2}, where Q_2 is the total number of particles, the i-th particle is X_i = (I_i, L_i) and its velocity is V_i, with I_i and L_i being a set of candidate solutions for the parameters I and L;
(3) according to the parameters determined by each particle X_i = (I_i, L_i) in the population, constructing the input and output matrices of the training set of the RBF neural network; for the residual sequence R and the number of RBF neural network input-layer neurons I_i, first establishing the matrices Z_5 and Z_6, wherein:
$$Z_5 = \begin{bmatrix} R(1) & R(2) & \cdots & R(M-I_i) \\ R(2) & R(3) & \cdots & R(M-I_i+1) \\ \vdots & \vdots & \ddots & \vdots \\ R(I_i) & R(I_i+1) & \cdots & R(M-1) \end{bmatrix}_{I_i \times (M-I_i)}$$

$$Z_6 = \begin{bmatrix} R(I_i+1) & R(I_i+2) & \cdots & R(M) \end{bmatrix}_{1 \times (M-I_i)}$$
for the training set length L_i of the neural network to be optimized, taking the last L_i columns of Z_5 as the training-set input matrix I_train and the last L_i columns of Z_6 as the training-set output matrix O_train; taking the forecast step length l as the test step length, taking the last l columns of Z_5 as the test-set input matrix I_test and the last l columns of Z_6 as the test-set output matrix O_test; taking the sum of squared errors of the test-set simulation results of the RBF neural network constructed from the training set as the fitness value, and, with the minimum fitness value as the optimization direction, using it as the evaluation criterion to judge the quality of each particle; recording the current individual extremum of particle X_i as P_best(i), and taking the best P_best(i) in the population as the global extremum G_best;
(4) updating the velocity and position of each particle X_i in the population respectively:
$$V_i^{g+1} = \omega V_i^{g} + c_1 r_1 \left(P_{best}(i) - X_i^{g}\right) + c_2 r_2 \left(G_{best} - X_i^{g}\right),$$

$$X_i^{g+1} = X_i^{g} + V_i^{g+1}$$
in the formulas: ω is the inertia weight, c_1 and c_2 are acceleration factors, g is the current iteration number, and r_1 and r_2 are random numbers distributed in [0, 1];
(5) recalculating the objective function value of each particle at this point, and updating P_best(i) and G_best;
(6) judging whether the maximum number of iterations has been reached; if so, ending the optimization process and obtaining the optimal parameter values (I_best, L_best) found by the particle swarm optimization; otherwise, returning to step (3);
(7) according to I_best and L_best, constructing the RBF neural network training set Z_7 and test set Z_8, wherein:
$$Z_7 = \begin{bmatrix} R(M-L_{best}-I_{best}+1) & R(M-L_{best}-I_{best}+2) & \cdots & R(M-I_{best}) \\ R(M-L_{best}-I_{best}+2) & R(M-L_{best}-I_{best}+3) & \cdots & R(M-I_{best}+1) \\ \vdots & \vdots & \ddots & \vdots \\ R(M-L_{best}) & R(M-L_{best}+1) & \cdots & R(M-1) \end{bmatrix}_{I_{best} \times L_{best}},$$

$$Z_8 = \begin{bmatrix} R(M-L_{best}+1) & R(M-L_{best}+2) & \cdots & R(M) \end{bmatrix}_{1 \times L_{best}}$$
and establishing the RBF neural network model, training it, carrying out single-step iterative prediction, and obtaining the corresponding prediction result.
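The claim does not fix a particular RBF implementation; as a minimal sketch under that caveat, the Gaussian RBF network below (centres taken from the training inputs, one shared width, output weights fitted by least squares) could be trained on the residual sequence R using the optimised I_best and L_best, reusing the hypothetical `build_lag_matrices` helper from the earlier sketch.

```python
import numpy as np

class SimpleRBF:
    """Minimal Gaussian RBF network: fixed centres picked from the training
    inputs, a single shared width, and output weights fitted by least squares."""

    def __init__(self, n_centers=10, width=1.0):
        self.n_centers, self.width = n_centers, width

    def _phi(self, X):
        # X: (n_samples, n_inputs); returns the (n_samples, n_centers) design matrix
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        idx = np.linspace(0, len(X) - 1, self.n_centers).astype(int)
        self.centers = X[idx]
        Phi = self._phi(X)
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# hypothetical usage on the residual sequence R with the optimised I_best, L_best:
# Z5, Z6 = build_lag_matrices(R, I_best)            # same helper as for Pk
# model = SimpleRBF(n_centers=8).fit(Z5[:, -L_best:].T, Z6[-L_best:])
# next_value = model.predict(np.asarray(R)[-I_best:].reshape(1, -1))
```

The least-squares fit of the output weights is used here only to keep the sketch short; the claim's PSO-optimised training procedure is not reproduced.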
7. The method according to claim 5 or 6, wherein: the inertia weight ω is 0.5 and the acceleration factors c_1 = c_2 = 1.49445.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710477986.5A CN107301475A (en) | 2017-06-21 | 2017-06-21 | Load forecast optimization method based on continuous power analysis of spectrum |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710477986.5A CN107301475A (en) | 2017-06-21 | 2017-06-21 | Load forecast optimization method based on continuous power analysis of spectrum |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107301475A true CN107301475A (en) | 2017-10-27 |
Family
ID=60135949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710477986.5A Pending CN107301475A (en) | 2017-06-21 | 2017-06-21 | Load forecast optimization method based on continuous power analysis of spectrum |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107301475A (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107871157B (en) * | 2017-11-08 | 2020-06-09 | 广东工业大学 | Data prediction method, system and related device based on BP and PSO |
CN107871157A (en) * | 2017-11-08 | 2018-04-03 | 广东工业大学 | Data predication method, system and relevant apparatus based on BP and PSO |
CN108182490A (en) * | 2017-12-27 | 2018-06-19 | 南京工程学院 | A kind of short-term load forecasting method under big data environment |
CN108694023A (en) * | 2018-02-22 | 2018-10-23 | 长安大学 | A kind of test method of marshal piece stability and flow valuve |
CN108694023B (en) * | 2018-02-22 | 2021-04-27 | 长安大学 | Method for testing stability and flow value of Marshall test piece |
CN108959704B (en) * | 2018-05-28 | 2022-10-14 | 华北电力大学 | Rewarding and punishing weight type simulation sequence similarity analysis method considering morphological change |
CN108959704A (en) * | 2018-05-28 | 2018-12-07 | 华北电力大学 | A kind of rewards and punishments weight type simulation sequence similarity analysis method considering metamorphosis |
CN108918932A (en) * | 2018-09-11 | 2018-11-30 | 广东石油化工学院 | Power signal adaptive filter method in load decomposition |
CN108918932B (en) * | 2018-09-11 | 2021-01-15 | 广东石油化工学院 | Adaptive filtering method for power signal in load decomposition |
CN109543879A (en) * | 2018-10-22 | 2019-03-29 | 新智数字科技有限公司 | Load forecasting method and device neural network based |
CN109935333A (en) * | 2019-03-07 | 2019-06-25 | 东北大学 | Online blood glucose prediction method based on OVMD-SE-PSO-BP |
CN109935333B (en) * | 2019-03-07 | 2022-12-09 | 东北大学 | OVMD-SE-PSO-BP-based online blood glucose prediction method |
CN114492090A (en) * | 2022-04-12 | 2022-05-13 | 中国气象局公共气象服务中心(国家预警信息发布中心) | Road surface temperature short-term forecasting method |
CN115018055A (en) * | 2022-06-17 | 2022-09-06 | 沃太能源股份有限公司 | Creation method, prediction device, electronic device and storage medium |
CN115018055B (en) * | 2022-06-17 | 2024-10-15 | 沃太能源股份有限公司 | Creation method, prediction method, apparatus, electronic device, and storage medium |
CN116341681A (en) * | 2023-03-31 | 2023-06-27 | 国网江苏省电力有限公司扬州供电分公司 | Low-voltage photovoltaic user power generation load model training and predicting method |
CN117630476A (en) * | 2024-01-26 | 2024-03-01 | 上海懿尚生物科技有限公司 | Real-time monitoring method and system for power load suitable for animal laboratory |
CN117630476B (en) * | 2024-01-26 | 2024-03-26 | 上海懿尚生物科技有限公司 | Real-time monitoring method and system for power load suitable for animal laboratory |
CN118211789A (en) * | 2024-03-19 | 2024-06-18 | 国网江苏省电力有限公司南通供电分公司 | Load response potential sensing method and system for iron and steel enterprises |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN107301475A (en) | Load forecast optimization method based on continuous power analysis of spectrum | |
CN110414045B (en) | Short-term wind speed prediction method based on VMD-GRU | |
Tian et al. | Multi-step short-term wind speed prediction based on integrated multi-model fusion | |
CN109492808B (en) | Method for predicting remaining parking spaces of indoor parking lot | |
CN102270309B (en) | Short-term electric load prediction method based on ensemble learning | |
CN107704953A (en) | The short-term wind-electricity power probability density Forecasting Methodology of EWT quantile estimate forests | |
CN111160520A (en) | BP neural network wind speed prediction method based on genetic algorithm optimization | |
CN107392364A (en) | The short-term load forecasting method of variation mode decomposition and depth belief network | |
CN111027775A (en) | Step hydropower station generating capacity prediction method based on long-term and short-term memory network | |
CN109583621A (en) | A kind of PSO-LSSVM short-term load forecasting method based on improvement variation mode decomposition | |
CN106933778A (en) | A kind of wind power combination forecasting method based on climbing affair character identification | |
CN106295899B (en) | Wind power probability density Forecasting Methodology based on genetic algorithm Yu supporting vector quantile estimate | |
CN110309603A (en) | A kind of short-term wind speed forecasting method and system based on wind speed characteristics | |
CN110766200A (en) | Method for predicting generating power of wind turbine generator based on K-means mean clustering | |
CN109726802B (en) | Machine learning prediction method for wind speed in railway and wind farm environment | |
CN111353652A (en) | Wind power output short-term interval prediction method | |
CN108717579B (en) | Short-term wind power interval prediction method | |
CN107609671A (en) | A kind of Short-Term Load Forecasting Method based on composite factor evaluation model | |
CN109508826B (en) | Electric vehicle cluster schedulable capacity prediction method based on gradient lifting decision tree | |
CN114091766B (en) | CEEMDAN-LSTM-based space load prediction method | |
CN105243461A (en) | Short-term load forecasting method based on artificial neural network improved training strategy | |
CN102509026A (en) | Comprehensive short-term output power forecasting model for wind farm based on maximum information entropy theory | |
CN107169612A (en) | The prediction of wind turbine active power and error revising method based on neutral net | |
CN111967183A (en) | Method and system for calculating line loss of distribution network area | |
CN104036328A (en) | Self-adaptive wind power prediction system and prediction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171027 |