CN106709820A - Electrical power system load prediction method and device based on depth belief network - Google Patents
Electrical power system load prediction method and device based on deep belief network
- Publication number
- CN106709820A CN106709820A CN201710021315.8A CN201710021315A CN106709820A CN 106709820 A CN106709820 A CN 106709820A CN 201710021315 A CN201710021315 A CN 201710021315A CN 106709820 A CN106709820 A CN 106709820A
- Authority
- CN
- China
- Prior art keywords
- layer
- training
- training sample
- load
- visible
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Abstract
The embodiment of the invention provides an electrical power system load prediction method and device based on a deep belief network (DBN), relating to the field of electric power systems. The method and device can increase convergence speed and lower prediction error. According to the specific scheme, the method comprises: acquiring a training sample and a test sample; constructing the energy function of an RBM model; training at least one hidden layer and a visible layer layer by layer with the training sample to obtain the weights of the training sample between the hidden-layer and visible-layer nodes; and obtaining a predicted value of the electrical power system load from the output data produced by the training sample and by feeding the test sample into the trained DBN. The method and device are used for electrical power system load prediction.
Description
Technical field
Embodiments of the invention relate to the field of electric power, and in particular to a power system load forecasting method and device based on a deep belief network.
Background technology
Power system load forecasting is an important component of power system planning and the basis of economical power system operation. It takes full account of correlated factors such as politics, economy and weather, and predicts future electricity demand from known demand. Accurate load forecast data contribute to grid dispatch control and safe operation, to the formulation of rational power construction plans, and to improving the economic and social benefits of the power system. Load forecasting commonly uses methods such as regression models, time series forecasting techniques and grey theory forecasting techniques. In recent years, with the rise of artificial neural network research, power system load forecasting with neural networks has substantially reduced prediction error and attracted great attention from load forecasting practitioners.
In the prior art, power system load forecasting is carried out with traditional neural networks. However, a traditional neural network is a typical global approximation network: one or more weights of the network affect every output. Moreover, the weights determined after each training run carry randomness, so the relation between input and output is not definite and the prediction results differ from run to run. The prior-art schemes therefore suffer from slow convergence and large prediction error.
Summary of the invention
Embodiments of the invention provide a power system load forecasting method and device based on a deep belief network, which can improve convergence speed and reduce prediction error.
To achieve the above purpose, the embodiments of the application adopt the following technical scheme:
In a first aspect, a power system load forecasting method based on a DBN is provided. The DBN is composed of multiple layers of restricted Boltzmann machines (RBMs) and includes at least one hidden layer and one visible layer. The load forecasting method includes:
obtaining a training sample and a test sample; constructing the energy function of the RBM model;
training the at least one hidden layer and the visible layer layer by layer with the training sample, obtaining the weights of the training sample between the nodes of the at least one hidden layer and the visible layer;
obtaining a predicted value of the power system load from the output data produced by the training sample and by inputting the test sample into the trained DBN.
In a second aspect, a power system load forecasting device based on a DBN is provided, arranged to perform the method of the first aspect.
In the power system load forecasting method and device based on a deep belief network provided by embodiments of the invention, multiple layers of restricted Boltzmann machines form a deep belief network comprising several hidden layers and one visible layer. The RBMs can be pre-trained layer by layer with a layer-wise unsupervised greedy pre-training method, and the result used as the initial value for supervised training of the probabilistic model, which greatly improves learning performance, improves convergence speed and reduces prediction error.
Brief description of the drawings
In order to explain the technical schemes of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow diagram of the power system load forecasting method based on a deep belief network provided by an embodiment of the invention;
Fig. 2 is an explanatory diagram of an RBM comprising one visible layer and one hidden layer;
Fig. 3 is an explanatory diagram of the optimal error rates of the three models under different structures in an embodiment of the invention;
Fig. 4 is a curve of mean absolute percentage error against iteration count in an embodiment of the invention;
Fig. 5 is a structural diagram of the DBN-based power system load forecasting device provided by an embodiment of the invention.
Specific embodiment
The technical schemes in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some embodiments of the invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative work fall within the scope of protection of the invention.
Aiming at the slow convergence and large prediction error of load forecasting with traditional neural networks, a power system load forecasting scheme based on a deep belief network (full English name: Deep Belief Network, abbreviation: DBN) is proposed herein. With reference to Fig. 1, the method comprises the following steps:
101. Obtain a training sample and a test sample.
102. Construct the energy function of the RBM model.
103. Train the at least one hidden layer and the visible layer layer by layer with the training sample, obtaining the weights of the training sample between the at least one hidden layer and the visible-layer nodes.
104. From the output data obtained with the training sample, and by inputting the test sample into the trained DBN, obtain the predicted value of the power system load.
In a specific embodiment of the invention, the DBN is composed of multiple layers of restricted Boltzmann machines (full English name: Restricted Boltzmann Machine, RBM) and includes at least one hidden layer and one visible layer. The RBMs are pre-trained layer by layer with a layer-wise unsupervised greedy pre-training method, and the result is used as the initial value for supervised training of the probabilistic model, which greatly improves learning performance. The details are as follows:
1. Data preprocessing
S1. Preprocess the training sample:
Assume all training samples are X = {X1, X2, ..., XM}, where Xi represents one group of load data and M represents the number of load data groups. The input samples are normalized in four steps:
(1) Compute the mean:
u = (1/M) Σ_{i=1..M} Xi
where u represents the mean of the samples.
(2) Compute the variance:
δ = (1/M) Σ_{i=1..M} (Xi - u)²
where δ represents the variance of the samples.
(3) Whiten:
Xi' = (Xi - u) / √δ
where Xi' represents the whitened sample data.
(4) Normalize:
Xi,n = (Xi' - min X') / (max X' - min X')
where Xi,n represents the normalized sample data.
S2. Preprocess the test sample:
The test sample is first whitened with the mean and variance of the training sample, and then uniformly normalized to between 0 and 1 according to the maximum and minimum of the training sample. Assume a single test sample is T; the normalization steps are:
(1) Whiten:
T' = (T - u) / √δ
where T' represents the whitened test sample data, T is a single test sample, u represents the mean of the training sample, and δ represents the variance of the training sample.
(2) Normalize:
Tn = (T' - min X') / (max X' - min X')
where Tn represents the normalized test sample data.
Whitening is used here because adjacent elements of natural data are strongly correlated; whitening therefore reduces the redundancy of the data, similar to principal component analysis (full English name: Principal Component Analysis, abbreviation: PCA) dimensionality reduction.
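As an illustration of steps S1 and S2, the four-step normalization and the reuse of the training statistics on a test sample can be sketched in NumPy. This is a sketch under assumptions, not the patent's own code: the function names and the small eps guard against division by zero are our additions.

```python
import numpy as np

def fit_normalizer(X, eps=1e-8):
    """Steps (1)-(4): mean, variance, whitening, min-max scaling of training data X (M x d)."""
    u = X.mean(axis=0)                       # (1) mean u
    var = X.var(axis=0)                      # (2) variance delta
    Xw = (X - u) / np.sqrt(var + eps)        # (3) whitening X'
    lo, hi = Xw.min(axis=0), Xw.max(axis=0)  # training min/max for step (4)
    Xn = (Xw - lo) / (hi - lo + eps)         # (4) normalize to [0, 1]
    return Xn, (u, var, lo, hi)

def apply_normalizer(T, stats, eps=1e-8):
    """S2: a test sample is normalized with the *training* statistics, as the method requires."""
    u, var, lo, hi = stats
    Tw = (T - u) / np.sqrt(var + eps)
    return (Tw - lo) / (hi - lo + eps)
```

Keeping the training-set statistics for the test sample matters: normalizing the test sample with its own mean and range would leak different scaling into prediction.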
2. Basic principle and composition of the deep belief network
The DBN is an energy-based model with a very powerful feature-learning ability, and a deep network that has been studied from early on. Its learning method was proposed by Hinton et al.; in essence, a machine learning model with many hidden layers is built and trained on massive training data to learn more useful features, finally improving the accuracy of classification or prediction.
A DBN can be regarded as a complex neural network composed of multiple layers of RBMs. It comprises one visible layer and several hidden layers, and the pre-training weights of the RBMs are trained with the contrastive divergence method. The deep neural network pre-trains the RBMs layer by layer with a layer-wise unsupervised greedy pre-training method, and the result is used as the initial value for supervised training of the probabilistic model, so learning performance is greatly improved.
An RBM is a generative stochastic neural network for learning the probability distribution over the input data; it can be trained in either a supervised or an unsupervised manner as needed, and it is widely used in many areas. The RBM shown in Fig. 2 has a two-layer network expressed as follows: the vector h = (h1, h2, ..., hm) represents the hidden units, with m elements; the vector v = (v1, v2, ..., vn) represents the visible layer, with n elements. Because there are no connections between elements within a layer, the elements within each layer are mutually independent, that is:
P(h|v) = p(h1|v) p(h2|v) ... p(hm|v);
P(v|h) = p(v1|h) p(v2|h) ... p(vn|h).
In general, the distributions of the visible and hidden units take Bernoulli form, i.e. each unit takes the value 0 or 1 with a probability determined by the other layer. It follows that the probability distribution of the whole model can be obtained from the probability distribution of each layer, so unsupervised training can be carried out layer by layer, with the output of the lower RBM used as the input of the upper RBM. At the last hidden layer, the output is fed into a softmax classifier, and the estimation error is computed and back-propagated for fine-tuning.
S3. Construct the energy function of the RBM model:
The energy function of the RBM model, with hidden and visible layers obeying Bernoulli distributions, is defined as:
E(v, h | θ) = -Σ_{i=1..n} ai vi - Σ_{j=1..m} bj hj - Σ_{i=1..n} Σ_{j=1..m} vi wij hj
where ai and bj represent bias terms and wij represents the connection weight between visible unit i and hidden unit j. θ = (a, b, w) are the weight parameters of the RBM model; the vector h = (h1, h2, ..., hm) represents the hidden units, with m elements, and the vector v = (v1, v2, ..., vn) represents the visible layer, with n elements.
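The energy function in step S3 can be written directly as code. The following is a minimal NumPy sketch in the patent's notation (a and b are the bias vectors, W the n×m connection-weight matrix); the function name is our own.

```python
import numpy as np

def rbm_energy(v, h, a, b, W):
    """Bernoulli-Bernoulli RBM energy:
    E(v, h) = -sum_i a_i v_i - sum_j b_j h_j - sum_ij v_i W_ij h_j
    """
    return -(a @ v) - (b @ h) - (v @ W @ h)
```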
S4. Compute the joint probability distribution of the visible and hidden units:
Given the model parameters, the joint probability distribution of the visible and hidden units can be expressed as:
P(v, h | θ) = e^{-E(v, h | θ)} / Z(θ), where Z(θ) = Σ_v Σ_h e^{-E(v, h | θ)}
where Z(θ) is a normalization factor, E(v, h) is the energy function of the RBM model of the hidden and visible layers, and the vector h = (h1, h2, ..., hm) represents the hidden units, with m elements.
From formula (7) and formula (8) it can be derived that:
p(hj = 1 | v) = σ(bj + Σ_i vi wij); p(vi = 1 | h) = σ(ai + Σ_j wij hj)
where σ(x) = 1/(1 + e^{-x}) is the sigmoid function.
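The two conditional distributions just derived factorize over units, so they can be computed in one vectorized step. A sketch under the patent's notation (v of length n, h of length m, W of shape n×m; the function names are our own):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_h_given_v(v, b, W):
    """p(h_j = 1 | v) = sigma(b_j + sum_i v_i W_ij), independent across j."""
    return sigmoid(b + v @ W)

def p_v_given_h(h, a, W):
    """p(v_i = 1 | h) = sigma(a_i + sum_j W_ij h_j), independent across i."""
    return sigmoid(a + W @ h)
```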
S5. Compute the marginal distributions of the visible and hidden layers:
From the joint probability distribution P(v, h) of the visible and hidden layers, the marginal distribution can be obtained:
P(v | θ) = (1/Z(θ)) Σ_h e^{-E(v, h | θ)}
where Z(θ) is a normalization factor, E(v, h) is the energy function of the RBM model of the hidden and visible layers, and the vector h = (h1, h2, ..., hm) represents the hidden units, with m elements.
S6. Construct the log-likelihood function:
ln L(θ) = Σ_{i=1..l} ln P(v^i | θ)
where l is the number of training samples, θ = (a, b, w) are the weight parameters of the RBM model, and P(v^i) is the marginal distribution of the visible layer.
S7. Compute the gradient of the log-likelihood function:
Differentiating the log-likelihood function for gradient descent and rearranging gives:
∂ ln L(θ)/∂θ = Σ_{i=1..l} [ Σ_h P(h | v^i) ∂(-E(v^i, h))/∂θ - Σ_{v,h} P(v, h) ∂(-E(v, h))/∂θ ]
where ln L(θ) is the log-likelihood function, l is the number of training samples, θ = (a, b, w) are the weight parameters of the RBM model, P(h | v^i) is the conditional distribution of the hidden layer given the visible layer, E(v, h) is the energy function of the RBM model of the hidden and visible layers, and P(h, v^i) is the joint probability distribution of the visible and hidden units.
3. Flow of predicting the power system load with the deep belief network
The idea of deep learning is to stack multiple layers, that is, the output of one layer serves as the input of the next. If a system S has n layers (S1, ..., Sn) with input I and output O, it can be represented as I => S1 => S2 => ... => Sn => O. By adjusting the parameters of the system so that the output O reproduces the input I, a series of hierarchical features of the input I, namely S1, ..., Sn, can be learned automatically. In this way a hierarchical representation of the input information is achieved. Predicting the power system load with a deep belief network is, in essence, a deep learning process.
The flow of predicting the power system load with the deep belief network is as follows. The pre-training weights of the RBMs are trained with the contrastive divergence method. By analysis, the algorithm uses three hidden layers and pre-trains the RBMs layer by layer with the layer-wise unsupervised greedy pre-training method; the result is used as the initial value for supervised training of the probabilistic model, improving learning performance. Simulation data show that, compared with the traditional neural network algorithm, this algorithm converges quickly and its prediction error is small.
(1) The pre-training weights of the RBMs are trained with the contrastive divergence method. The KL divergence is an asymmetric measure, so the value of KL(Q||P) differs from KL(P||Q); the greater the difference between the two distributions, the larger the KL divergence. Maximizing the log-likelihood function of the RBM model finally reduces to computing the difference of the KL divergences of two probability distributions:
KL(Q||P) - KL(Pk||P)    (formula 15)
where Q is the prior distribution and Pk is the distribution after k steps of Gibbs sampling. If the Gibbs chain has reached its stationary state (since the initial state is v(0) = v, it is stationary), then Pk = P, i.e. KL(Pk||P) = 0, so the estimation error obtained by the contrastive divergence algorithm equals 0.
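In its usual CD-1 form, the contrastive divergence training described here takes one Gibbs step per update and moves the weights along the difference between the data statistics and the reconstruction statistics. The following NumPy sketch assumes Bernoulli units and a learning rate of 0.01 as in the experiments; the patent does not fix these implementation details.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, a, b, W, lr=0.01):
    """One CD-1 update: a single Gibbs step approximates the model expectation,
    so theta moves along <v h>_data - <v h>_reconstruction."""
    ph0 = sigmoid(b + v0 @ W)                       # p(h | v0) from the data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
    pv1 = sigmoid(a + W @ h0)                       # reconstruction p(v | h0)
    ph1 = sigmoid(b + pv1 @ W)                      # p(h | v1)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return a, b, W
```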
(2) The result obtained is used as the initial value for supervised training of the probabilistic model. Relative entropy, also called KL divergence, is commonly used to measure the distance between two probability distributions. The KL divergence between two probability distributions Q and P over the state space is defined as:
KL(Q||P) = Σ_x Q(x) ln(Q(x)/P(x))    (formula 16)
where KL(Q||P) is the KL divergence between the two probability distributions Q and P over the state space.
(3) The pre-training weights and initial values obtained are passed to the next layer as input data, and training is carried out again at the next layer: one layer of the network is trained at a time, layer by layer. Specifically, the first layer is trained first with unlabelled data, learning the parameters of the first layer (this layer can be regarded as the hidden layer of a three-layer neural network that minimizes the difference between output and input). After the (n-1)-th layer has been learned, the output of the (n-1)-th layer is used as the input of the n-th layer and the n-th layer is trained, thus obtaining the parameters of each layer in turn. The training process is therefore an iterative one. After all layers have been trained, the wake-sleep algorithm is used for tuning.
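The layer-by-layer scheme in step (3) — each layer's output becoming the next layer's input — can be sketched as follows. Here `train_rbm` is a placeholder for any single-RBM trainer (the contrastive divergence procedure described in (1) is one option), not a function defined by the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def greedy_pretrain(X, layer_sizes, train_rbm):
    """Greedy layer-wise pre-training: the hidden activations of layer n-1
    become the 'visible' data of layer n. `train_rbm(data, n_hidden)` must
    return (b, W) for one trained RBM."""
    params, data = [], X
    for n_hidden in layer_sizes:
        b, W = train_rbm(data, n_hidden)
        params.append((b, W))
        data = sigmoid(b + data @ W)  # propagate up as the next layer's input
    return params
```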
Wake phase: the cognitive process. The abstract representation (node states) of each layer is produced from the external features and the upward (cognitive) weights, and the downward (generative) weights between layers are modified by gradient descent.
Sleep phase: the generative process. The state of the bottom layer is generated from the top-layer representation and the downward weights, while the upward weights between layers are modified.
(4) After pre-training, the DBN can be adjusted for discriminative performance with labelled data and the BP algorithm. Here a label set is attached to the top layer (extending the associative memory), and a classification surface of the network is obtained bottom-up from the learned recognition weights. This performs better than a network trained by the BP algorithm alone, which can be explained intuitively: the BP algorithm of a DBN only needs to carry out a local search of the weight parameter space, which is faster than training a feed-forward neural network and also takes less time to converge.
(5) After the training and tuning steps end, the weights of the highest two layers (i.e. the last hidden layer and the visible layer) are connected together, so that the output of the lower layers provides a reference clue or association for the top layer, and the top layer associates it with its memory content. The load value of the power system can thereby be predicted.
In a specific embodiment: S8: the KL divergence over the state space of the two probability distributions Q and P of the visible and hidden layers is computed according to formula 16. S9: according to formula 15, maximizing the log-likelihood function of the RBM model finally reduces to computing the difference of the KL divergences of the two probability distributions Q and P. S10: S1-S9 train the training sample sequentially through the RBM models, obtaining the weights of the training sample between the visible-layer nodes and the hidden-layer nodes; the weights are passed from the hidden layer to the visible layer, and the visible layer can memorize this weight content and produce the predicted load of the training sample, used in the prediction process for the power system load. S11: the test sample, plus the mean load value and peak load value of the training sample obtained in S10, the mean temperature, and the mean temperature of the prediction day from the numerical weather forecast, are used as the input data of the deep neural network algorithm; through the computation of S1-S9, the output data are the predicted load values of the power system.
4. Experimental setup
The load data of the East Slovakia Electricity Corporation from January to October 2007, provided by the EUNITE competition, are used as training data, and the November-December data are used as prediction test data. The input data of the neural network include: the 48 sampling points of the previous day (the average load data per half hour), the mean temperature of the previous day, the load peak of the previous day, the load valley of the previous day, the mean of the previous day, and the predicted mean daily temperature. The prediction algorithm predicts the 48 data points of the next day at once.
In load prediction applications, the half-hourly load predictions from hour 0 to hour 24 are generally required. The algorithm takes the 48 historical data points from 0:00 to 24:00 of the day before the prediction day, plus the mean load value and peak load value of the day before the prediction day, the mean temperature, and the predicted mean daily temperature from the numerical weather forecast, as the input data of the deep neural network algorithm. The output data are the predicted half-hourly load data from 0:00 to 24:00 of the prediction day.
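The input vector described above can be assembled as follows. This is an illustrative sketch: the feature ordering and the function name are our assumptions, since the patent lists the inputs but does not fix their order.

```python
import numpy as np

def build_input(prev_day_load, prev_day_mean_temp, pred_day_mean_temp):
    """One input vector: the previous day's 48 half-hourly loads, plus its
    peak, valley and mean load, the previous day's mean temperature, and the
    forecast mean temperature of the prediction day (53 features in total)."""
    prev = np.asarray(prev_day_load, dtype=float)
    assert prev.shape == (48,), "expects 48 half-hourly load points"
    extras = [prev.max(), prev.min(), prev.mean(),
              prev_day_mean_temp, pred_day_mean_temp]
    return np.concatenate([prev, extras])
```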
To compare algorithm performance, a BP neural network and a self-organizing fuzzy neural network (full English name: Self-Organizing Fuzzy Neural Network, abbreviation: SOFNN) are contrasted with the deep belief network. The BP neural network has a good capacity for capturing non-linear regularities; when the hidden layer has enough neurons, a 3-layer perceptron model can approximate any non-linear function. The main advantage of the SOFNN algorithm is that it can determine the network structure automatically and give the model parameters, with good prediction precision.
The evaluation criterion is the mean absolute percentage error (MAPE), defined as:
MAPE = (100/n) Σ_{i=1..n} |Di - D̂i| / Di
where Di and D̂i are respectively the actual value and the predicted value of the peak load on the i-th day of a given month of 1997, and n is the number of days in that month.
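The MAPE criterion is straightforward to compute; a minimal sketch (the function name is our own):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent:
    MAPE = (100/n) * sum_i |D_i - Dhat_i| / D_i."""
    assert len(actual) == len(predicted) and len(actual) > 0
    n = len(actual)
    return 100.0 / n * sum(abs(d - p) / d for d, p in zip(actual, predicted))
```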
5. Performance analysis
(1) Influence of the network structure on algorithm performance
First, the experiment studies the influence of the network structure (number of hidden layers and neurons) on the prediction effect. The load data from January to October 2007 are used as training data and the November load data as test data. The network uses 1, 2 and 3 hidden layers, and 20, 50, 100, 200 and 400 neurons respectively; for convenience of comparison, every layer has the same number of neurons. For the initial parameters, the following are selected manually from these ranges to obtain the optimal recognition rate: BP learning rate (0.1, 0.05, 0.02, 0.01, 0.005) and pre-training learning rate (0.01, 0.005, 0.002, 0.001). Fig. 3 shows the optimal error rates of the three models under the different structures.
As Fig. 3 shows, when the number of hidden layers or of neurons is small, the BP network without pre-training performs better: with only one hidden layer, the error rate of SOFNN only becomes comparable to BP when the number of neurons reaches 200, while with 2 and 3 hidden layers the performance of SOFNN approaches and then exceeds BP when the number of neurons reaches 100 and 50 respectively. The same can be found for DBN: SOFNN performs comparatively better when the number of neurons is small, and the opposite holds when it is large. With the same number of hidden layers, because of over-fitting, the performance of the BP network does not keep improving as the number of nodes increases and instead declines when there are too many nodes, whereas the depth model keeps improving and tends to stabilize. This shows that the performance of the pre-trained deep network gradually improves as the network scale expands (more hidden layers or more nodes). It can be seen that too few hidden layers and hidden nodes reduce the performance of the depth model. The reason can be explained as follows: the role of the pre-trained model is to extract the core features of the input; owing to the sparsity constraint, if the number of neurons is too small, only a few neurons are activated for some input samples, and these features cannot represent the original input, so some information is lost and performance declines. Although a larger network gives a better depth model, the training time also increases, so performance must be weighed against training time.
(2) Comparison of algorithm convergence
Fig. 4 shows the mean absolute percentage error as a function of the number of iterations. The algorithm uses three hidden layers with 100 neurons per layer. The learning rate during pre-training is 0.01; during fine-tuning the learning rate is 0.1 and the momentum coefficient is 0.5. The load data from January to November 2007 are used as training data and the December load data as test data. The change of the mean absolute percentage error of BP, SOFNN and DBN over 1000 iterations is compared.
Analysing Fig. 4, it can be seen that as the number of iterations increases, the error rates of all three models gradually decrease, because the distribution of the network parameters gets closer and closer to the minimum as training proceeds. However, after a certain number of training iterations the error rate of the BP network begins to oscillate and tends to increase gradually, while the error rate of the pre-trained DBN network declines stably. This indirectly demonstrates that pre-training brings the distribution region of the initial network parameters closer to the minimum and effectively avoids local oscillation.
Load forecasting provides an important basis for distribution network management decisions and operation modes. The present invention proposes a load forecasting algorithm using a deep belief network, improving on the slow learning rate and low forecasting efficiency of existing neural network algorithms. Simulation results show that, compared with traditional neural network algorithms, the load forecasting effect of the algorithm based on the deep belief network is clearly better.
Embodiments of the invention also provide an electrical power system load prediction device based on a DBN, for performing the electrical power system load prediction method described in the above embodiments. Referring to Fig. 5, the electrical power system load prediction device includes:
a data processing unit 501, configured to obtain training samples and test samples, and to construct the energy function of the RBM model;
a training unit 502, configured to train the at least one hidden layer and the visible layer layer by layer using the training samples, and to obtain the weights of the training samples between the at least one hidden layer and the visible layer nodes;
a prediction unit 503, configured to input the output data obtained from the training samples and the test samples into the trained DBN, and to obtain a predicted value of the power system load.
Optionally, the data processing unit 501 is further configured to normalize the training samples X = {X_1, X_2, ..., X_M} composed of load data X_i, wherein X_i represents one group of load data and M represents the number of load data groups; and is further configured to whiten the test samples according to the mean and variance of the training samples, and to uniformly normalize them to between 0 and 1 according to the maximum and minimum values of the training samples.
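A minimal sketch of this preprocessing, assuming a one-dimensional load series; the function and variable names are illustrative, and the exact order of whitening and rescaling is one plausible reading of the description above:

```python
import numpy as np

def preprocess(train, test):
    """Whiten with the training mean/variance, then uniformly rescale
    to [0, 1] using the training minimum and maximum (a sketch)."""
    train = np.asarray(train, dtype=float)
    test = np.asarray(test, dtype=float)
    # Whiten both sets with statistics computed on the *training* samples only.
    mu, sigma = train.mean(), train.std()
    train_w = (train - mu) / sigma
    test_w = (test - mu) / sigma
    # Uniformly normalize to [0, 1] according to the training min and max.
    lo, hi = train_w.min(), train_w.max()
    train_n = (train_w - lo) / (hi - lo)
    test_n = np.clip((test_w - lo) / (hi - lo), 0.0, 1.0)
    return train_n, test_n
```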
Optionally, the distributions of the visible layer units and hidden layer units satisfy the Bernoulli form;
the energy function of the RBM model whose visible layer and hidden layer satisfy Bernoulli distributions, as defined in the data processing unit 501, is:
wherein a_i and b_j denote the bias terms, w_ij denotes the connection weight between a visible unit and a hidden unit, θ = (a, b, w) is the weight parameter of the RBM model, the vector h = (h_1, h_2, ..., h_m) denotes the hidden units, with m elements, and the vector v = (v_1, v_2, ..., v_n) denotes the visible layer, with n elements.
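The formula itself does not survive in this text. For a Bernoulli–Bernoulli RBM, the standard energy function consistent with the terms a_i, b_j, w_ij, v, and h defined above (a reconstruction from those definitions, not reproduced from the original) is:

```latex
E(v, h; \theta) = -\sum_{i=1}^{n} a_i v_i \;-\; \sum_{j=1}^{m} b_j h_j \;-\; \sum_{i=1}^{n}\sum_{j=1}^{m} v_i \, w_{ij} \, h_j
```

The joint distribution of the model is then $p(v, h; \theta) \propto e^{-E(v, h; \theta)}$.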
Optionally, the training unit 502 is specifically configured to train the pre-training weights of the RBM using the contrastive divergence method, and to use the obtained results as initial values for supervised training of the probability model; the obtained pre-training weights and initial values are passed as input data to the next layer of the network, where training is performed again; one layer of the network is trained at a time, so that the at least one hidden layer and the visible layer are trained layer by layer.
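The layer-by-layer contrastive-divergence pre-training described here can be sketched as follows. This is a minimal CD-1 illustration with illustrative names, not the patented implementation: each RBM is trained on the activations of the layer below, and its hidden activations become the input of the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, lr=0.01):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM."""
    # Up pass: hidden probabilities and a sampled binary hidden state.
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down pass: one-step reconstruction of the visible layer, then up again.
    pv1 = sigmoid(h0 @ W.T + a)
    ph1 = sigmoid(pv1 @ W + b)
    # Gradient approximation: positive phase minus negative phase.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return ph0  # hidden activations feed the next layer's training

def pretrain(data, layer_sizes, epochs=5, lr=0.01):
    """Greedy layer-wise pre-training: train one RBM at a time, passing
    its hidden activations up as the next layer's visible input."""
    x = np.asarray(data, dtype=float)
    weights = []
    for n_hid in layer_sizes:
        n_vis = x.shape[1]
        W = 0.01 * rng.standard_normal((n_vis, n_hid))
        a = np.zeros(n_vis)
        b = np.zeros(n_hid)
        for _ in range(epochs):
            h = cd1_update(x, W, a, b, lr)
        weights.append((W, a, b))  # initial values for supervised fine-tuning
        x = h
    return weights
```

The returned weights would then initialize a feed-forward network for supervised fine-tuning, as the description states.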
Optionally, the output data obtained from the training samples includes at least the average load value, the maximum load value, and the average temperature of the training samples, and the predicted daily average temperature of the numerical weather forecast.
The electrical power system load prediction method and device based on a deep belief network provided by the embodiments of the invention use multiple layers of restricted Boltzmann machines to form a deep belief network comprising several hidden layers and one visible layer. The RBMs are pre-trained layer by layer with a layer-wise unsupervised greedy pre-training method, and the obtained results are used as initial values for supervised training of the probability model, which greatly improves learning performance, accelerates convergence, and reduces prediction error, addressing the slow learning rate and low prediction efficiency of existing neural network algorithms. Simulation results show that, compared with traditional neural network algorithms, the algorithm based on the deep belief network achieves a clearly better load prediction effect.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (10)
1. An electrical power system load prediction method based on a deep belief network (DBN), characterized in that the DBN is composed of multiple layers of restricted Boltzmann machines (RBM) and comprises at least one hidden layer and one visible layer, the electrical power system load prediction method comprising:
obtaining training samples and test samples, and constructing an energy function of the RBM model;
training the at least one hidden layer and the visible layer layer by layer using the training samples, and obtaining the weights of the training samples between the at least one hidden layer and the visible layer nodes;
inputting the output data obtained from the training samples and the test samples into the trained DBN, and obtaining a predicted value of the power system load.
2. The electrical power system load prediction method according to claim 1, characterized in that, before constructing the energy function of the RBM model, the method further comprises:
normalizing the training samples X = {X_1, X_2, ..., X_M} composed of load data X_i, wherein X_i represents one group of load data and M represents the number of load data groups;
whitening the test samples according to the mean and variance of the training samples, and uniformly normalizing them to between 0 and 1 according to the maximum and minimum values of the training samples.
3. The electrical power system load prediction method according to claim 1, characterized in that the distributions of the visible layer units and hidden layer units satisfy the Bernoulli form, and constructing the energy function of the RBM model comprises:
defining the energy function of the RBM model whose visible layer and hidden layer satisfy Bernoulli distributions as:
wherein a_i and b_j denote the bias terms, w_ij denotes the connection weight between a visible unit and a hidden unit, θ = (a, b, w) is the weight parameter of the RBM model, the vector h = (h_1, h_2, ..., h_m) denotes the hidden units, with m elements, and the vector v = (v_1, v_2, ..., v_n) denotes the visible layer, with n elements.
4. The electrical power system load prediction method according to claim 3, characterized in that training the at least one hidden layer and the visible layer layer by layer using the training samples, and obtaining the weights of the training samples between the at least one hidden layer and the visible layer nodes, comprises:
training the pre-training weights of the RBM using the contrastive divergence method, and using the obtained results as initial values for supervised training of the probability model;
passing the obtained pre-training weights and initial values as input data to the next layer of the network, and training again in the next layer of the network;
training one layer of the network at a time, so that the at least one hidden layer and the visible layer are trained layer by layer.
5. The electrical power system load prediction method according to claim 3, characterized in that
the output data obtained from the training samples includes at least the average load value, the maximum load value, and the average temperature of the training samples, and the predicted daily average temperature of the numerical weather forecast.
6. An electrical power system load prediction device based on a deep belief network (DBN), characterized in that the DBN is composed of multiple layers of restricted Boltzmann machines (RBM) and comprises at least one hidden layer and one visible layer, the electrical power system load prediction device comprising:
a data processing unit, configured to obtain training samples and test samples, and to construct the energy function of the RBM model;
a training unit, configured to train the at least one hidden layer and the visible layer layer by layer using the training samples, and to obtain the weights of the training samples between the at least one hidden layer and the visible layer nodes;
a prediction unit, configured to input the output data obtained from the training samples and the test samples into the trained DBN, and to obtain a predicted value of the power system load.
7. The electrical power system load prediction device according to claim 6, characterized in that
the data processing unit is further configured to normalize the training samples X = {X_1, X_2, ..., X_M} composed of load data X_i, wherein X_i represents one group of load data and M represents the number of load data groups; and is further configured to whiten the test samples according to the mean and variance of the training samples, and to uniformly normalize them to between 0 and 1 according to the maximum and minimum values of the training samples.
8. The electrical power system load prediction device according to claim 6, characterized in that
the distributions of the visible layer units and hidden layer units satisfy the Bernoulli form;
the energy function of the RBM model whose visible layer and hidden layer satisfy Bernoulli distributions, as defined in the data processing unit, is:
wherein a_i and b_j denote the bias terms, w_ij denotes the connection weight between a visible unit and a hidden unit, θ = (a, b, w) is the weight parameter of the RBM model, the vector h = (h_1, h_2, ..., h_m) denotes the hidden units, with m elements, and the vector v = (v_1, v_2, ..., v_n) denotes the visible layer, with n elements.
9. The electrical power system load prediction device according to claim 8, characterized in that
the training unit is specifically configured to train the pre-training weights of the RBM using the contrastive divergence method, and to use the obtained results as initial values for supervised training of the probability model; the obtained pre-training weights and initial values are passed as input data to the next layer of the network, where training is performed again; one layer of the network is trained at a time, so that the at least one hidden layer and the visible layer are trained layer by layer.
10. The electrical power system load prediction device according to claim 8, characterized in that
the output data obtained from the training samples includes at least the average load value, the maximum load value, and the average temperature of the training samples, and the predicted daily average temperature of the numerical weather forecast.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710021315.8A CN106709820A (en) | 2017-01-11 | 2017-01-11 | Electrical power system load prediction method and device based on depth belief network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106709820A (en) | 2017-05-24 |
Family
ID=58907274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710021315.8A Pending CN106709820A (en) | 2017-01-11 | 2017-01-11 | Electrical power system load prediction method and device based on depth belief network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106709820A (en) |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107862384A (en) * | 2017-11-16 | 2018-03-30 | 国家电网公司 | A kind of method for building up of distribution network load disaggregated model |
CN107947156B (en) * | 2017-11-24 | 2021-02-05 | 国网辽宁省电力有限公司 | Power grid fault critical clearing time discrimination method based on improved Softmax regression |
CN107947156A (en) * | 2017-11-24 | 2018-04-20 | 国网辽宁省电力有限公司 | Based on the electric network fault critical clearing time method of discrimination for improving Softmax recurrence |
CN107993012A (en) * | 2017-12-04 | 2018-05-04 | 国网湖南省电力有限公司娄底供电分公司 | A kind of adaptive electric system on-line transient stability appraisal procedure of time |
CN107993012B (en) * | 2017-12-04 | 2022-09-30 | 国网湖南省电力有限公司娄底供电分公司 | Time-adaptive online transient stability evaluation method for power system |
WO2019141040A1 (en) * | 2018-01-22 | 2019-07-25 | 佛山科学技术学院 | Short term electrical load predication method |
CN108303624A (en) * | 2018-01-31 | 2018-07-20 | 舒天才 | A kind of method for detection of partial discharge of switch cabinet based on voice signal analysis |
CN110119826A (en) * | 2018-02-06 | 2019-08-13 | 天津职业技术师范大学 | A kind of power-system short-term load forecasting method based on deep learning |
CN108537337A (en) * | 2018-04-04 | 2018-09-14 | 中航锂电技术研究院有限公司 | Lithium ion battery SOC prediction techniques based on optimization depth belief network |
CN108549960A (en) * | 2018-04-20 | 2018-09-18 | 国网重庆市电力公司永川供电分公司 | A kind of 24 hours Methods of electric load forecasting |
CN108646149A (en) * | 2018-04-28 | 2018-10-12 | 国网江苏省电力有限公司苏州供电分公司 | Fault electric arc recognition methods based on current characteristic extraction |
CN109214503B (en) * | 2018-08-01 | 2021-09-10 | 华北电力大学 | Power transmission and transformation project cost prediction method based on KPCA-LA-RBM |
CN109214503A (en) * | 2018-08-01 | 2019-01-15 | 华北电力大学 | Project of transmitting and converting electricity cost forecasting method based on KPCA-LA-RBM |
CN109358962A (en) * | 2018-08-15 | 2019-02-19 | 南京邮电大学 | The autonomous distributor of mobile computing resource |
CN109358962B (en) * | 2018-08-15 | 2022-02-11 | 南京邮电大学 | Mobile computing resource autonomous allocation device |
CN109343951B (en) * | 2018-08-15 | 2022-02-11 | 南京邮电大学 | Mobile computing resource allocation method, computer-readable storage medium and terminal |
CN109343951A (en) * | 2018-08-15 | 2019-02-15 | 南京邮电大学 | Mobile computing resource allocation methods, computer readable storage medium and terminal |
CN109190820B (en) * | 2018-08-29 | 2022-03-18 | 东北电力大学 | Electric power market electricity selling quantity depth prediction method considering user loss rate |
CN109190820A (en) * | 2018-08-29 | 2019-01-11 | 东北电力大学 | A kind of electricity market electricity sales amount depth prediction approach considering churn rate |
CN109358230A (en) * | 2018-10-29 | 2019-02-19 | 国网甘肃省电力公司电力科学研究院 | A kind of micro-capacitance sensor is fallen into a trap and the Intelligent electric-energy metering method of m-Acetyl chlorophosphonazo |
CN109579896A (en) * | 2018-11-27 | 2019-04-05 | 佛山科学技术学院 | Underwater robot sensor fault diagnosis method and device based on deep learning |
CN109655711A (en) * | 2019-01-10 | 2019-04-19 | 国网福建省电力有限公司漳州供电公司 | Power distribution network internal overvoltage kind identification method |
CN109871622A (en) * | 2019-02-25 | 2019-06-11 | 燕山大学 | A kind of low-voltage platform area line loss calculation method and system based on deep learning |
CN110033128A (en) * | 2019-03-18 | 2019-07-19 | 西安科技大学 | Drag conveyor loaded self-adaptive prediction technique based on limited Boltzmann machine |
CN110033128B (en) * | 2019-03-18 | 2023-01-31 | 西安科技大学 | Self-adaptive prediction method for scraper conveyor load based on limited Boltzmann machine |
CN109949180A (en) * | 2019-03-19 | 2019-06-28 | 山东交通学院 | A kind of the cool and thermal power load forecasting method and system of ship cooling heating and power generation system |
CN110119837B (en) * | 2019-04-15 | 2023-01-03 | 天津大学 | Space load prediction method based on urban land property and development time |
CN110119837A (en) * | 2019-04-15 | 2019-08-13 | 天津大学 | A kind of Spatial Load Forecasting method based on urban land property and development time |
CN110084413A (en) * | 2019-04-17 | 2019-08-02 | 南京航空航天大学 | Safety of civil aviation risk index prediction technique based on PCA Yu depth confidence network |
CN110263995A (en) * | 2019-06-18 | 2019-09-20 | 广西电网有限责任公司电力科学研究院 | Consider the distribution transforming heavy-overload prediction technique of load growth rate and user power utilization characteristic |
CN110263995B (en) * | 2019-06-18 | 2022-03-22 | 广西电网有限责任公司电力科学研究院 | Distribution transformer overload prediction method considering load increase rate and user power utilization characteristics |
CN110543656A (en) * | 2019-07-12 | 2019-12-06 | 华南理工大学 | LED fluorescent powder glue coating thickness prediction method based on deep learning |
CN110782074A (en) * | 2019-10-09 | 2020-02-11 | 深圳供电局有限公司 | Method for predicting user power monthly load based on deep learning |
CN110852522B (en) * | 2019-11-19 | 2024-03-29 | 南京工程学院 | Short-term power load prediction method and system |
CN110852522A (en) * | 2019-11-19 | 2020-02-28 | 南京工程学院 | Short-term power load prediction method and system |
CN111667090A (en) * | 2020-03-25 | 2020-09-15 | 国网天津市电力公司 | Load prediction method based on deep belief network and weight sharing |
CN112232547A (en) * | 2020-09-09 | 2021-01-15 | 国网浙江省电力有限公司营销服务中心 | Special transformer user short-term load prediction method based on deep belief neural network |
CN112232547B (en) * | 2020-09-09 | 2023-12-12 | 国网浙江省电力有限公司营销服务中心 | Special transformer user short-term load prediction method based on deep confidence neural network |
CN112465664A (en) * | 2020-11-12 | 2021-03-09 | 贵州电网有限责任公司 | AVC intelligent control method based on artificial neural network and deep reinforcement learning |
CN112465664B (en) * | 2020-11-12 | 2022-05-03 | 贵州电网有限责任公司 | AVC intelligent control method based on artificial neural network and deep reinforcement learning |
CN112381297A (en) * | 2020-11-16 | 2021-02-19 | 国家电网公司华中分部 | Method for predicting medium-term and long-term electricity consumption in region based on social information calculation |
CN112308342A (en) * | 2020-11-25 | 2021-02-02 | 广西电网有限责任公司北海供电局 | Daily load prediction method based on deep time decoupling and application |
CN113011645A (en) * | 2021-03-15 | 2021-06-22 | 国网河南省电力公司电力科学研究院 | Power grid strong wind disaster early warning method and device based on deep learning |
CN113177355A (en) * | 2021-04-28 | 2021-07-27 | 南方电网科学研究院有限责任公司 | Power load prediction method |
CN113177355B (en) * | 2021-04-28 | 2024-01-12 | 南方电网科学研究院有限责任公司 | Power load prediction method |
CN116365519A (en) * | 2023-06-01 | 2023-06-30 | 国网山东省电力公司微山县供电公司 | Power load prediction method, system, storage medium and equipment |
CN116365519B (en) * | 2023-06-01 | 2023-09-26 | 国网山东省电力公司微山县供电公司 | Power load prediction method, system, storage medium and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106709820A (en) | Electrical power system load prediction method and device based on depth belief network | |
CN110414788A (en) | A kind of power quality prediction technique based on similar day and improvement LSTM | |
CN105631558A (en) | BP neural network photovoltaic power generation system power prediction method based on similar day | |
CN109754113A (en) | Load forecasting method based on dynamic time warping Yu length time memory | |
CN109902874A (en) | A kind of micro-capacitance sensor photovoltaic power generation short term prediction method based on deep learning | |
CN109063911A (en) | A kind of Load aggregation body regrouping prediction method based on gating cycle unit networks | |
CN104077632B (en) | A kind of wind electric field power prediction method based on deep neural network | |
CN108898251A (en) | Consider the marine wind electric field power forecasting method of meteorological similitude and power swing | |
CN107730031A (en) | A kind of ultra-short term peak load forecasting method and its system | |
CN109583565A (en) | Forecasting Flood method based on the long memory network in short-term of attention model | |
CN108549960A (en) | A kind of 24 hours Methods of electric load forecasting | |
CN107563122A (en) | The method of crime prediction of Recognition with Recurrent Neural Network is locally connected based on interleaving time sequence | |
CN105069521A (en) | Photovoltaic power plant output power prediction method based on weighted FCM clustering algorithm | |
CN109146063B (en) | Multi-segment short-term load prediction method based on important point segmentation | |
CN106447133A (en) | Short-term electric load prediction method based on deep self-encoding network | |
Wang et al. | A regional pretraining-classification-selection forecasting system for wind power point forecasting and interval forecasting | |
CN110276472A (en) | A kind of offshore wind farm power ultra-short term prediction method based on LSTM deep learning network | |
CN112329990A (en) | User power load prediction method based on LSTM-BP neural network | |
CN112232561A (en) | Power load probability prediction method based on constrained parallel LSTM quantile regression | |
CN109214565A (en) | A kind of subregion system loading prediction technique suitable for the scheduling of bulk power grid subregion | |
CN114118596A (en) | Photovoltaic power generation capacity prediction method and device | |
CN107256436A (en) | The prediction and matching and control method of dissolving of thermal storage electric boiler and clean energy resource | |
CN110070228A (en) | BP neural network wind speed prediction method for neuron branch evolution | |
CN108964023A (en) | A kind of busbar voltage situation short term prediction method and system for power grid | |
CN110135634A (en) | Long-medium term power load forecasting device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170524 |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170524 |