CN106709820A - Electrical power system load prediction method and device based on depth belief network - Google Patents


Info

Publication number
CN106709820A
Authority
CN
China
Prior art keywords
layer
training
load
visible
training sample
Prior art date
Application number
CN201710021315.8A
Other languages
Chinese (zh)
Inventor
吴争荣
董旭柱
陆锋
刘志文
陶文伟
谢雄威
陈立明
何锡祺
俞小勇
陈根军
禤亮
苏颜
李瑾
陶凯
Original Assignee
中国南方电网有限责任公司电网技术研究中心
南方电网科学研究院有限责任公司
南京南瑞继保电气有限公司
中国南方电网有限责任公司
广西电网有限责任公司电力科学研究院
广西电网有限责任公司南宁供电局
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国南方电网有限责任公司电网技术研究中心, 南方电网科学研究院有限责任公司, 南京南瑞继保电气有限公司, 中国南方电网有限责任公司, 广西电网有限责任公司电力科学研究院, 广西电网有限责任公司南宁供电局 filed Critical 中国南方电网有限责任公司电网技术研究中心
Priority to CN201710021315.8A priority Critical patent/CN106709820A/en
Publication of CN106709820A publication Critical patent/CN106709820A/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06Q - DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation, e.g. linear programming, "travelling salesman problem" or "cutting stock problem"
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 - Electricity, gas or water supply

Abstract

An embodiment of the invention provides a power system load prediction method and device based on a deep belief network (DBN), relating to the field of electric power systems. The method and device can increase convergence speed and reduce prediction error. In the disclosed scheme, the method comprises the steps of: acquiring a training sample and a test sample; constructing the energy function of an RBM model; training at least one hidden layer and a visible layer layer by layer using the training sample, to obtain the weights of the training sample between the nodes of the hidden layers and the visible layer; and obtaining a predicted value of the power system load from the output data obtained from the training sample and by inputting the test sample into the trained DBN. The method and device are used for power system load prediction.

Description

A power system load prediction method and device based on a deep belief network
Technical field
Embodiments of the invention relate to the field of electric power, and in particular to a power system load prediction method and device based on a deep belief network.
Background technology
Power system load prediction is an important component of power system planning and the basis of economical power system operation. It fully considers the influence of correlated factors such as politics, economy and weather, and predicts future electricity demand from known demand data. Accurate load prediction data contributes to grid dispatch control and safe operation, helps formulate rational power construction plans, and improves the economic and social benefits of the power system. Load prediction commonly employs methods such as regression models, time-series forecasting techniques and grey-theory forecasting techniques. In recent years, with the rise of artificial neural network research, load prediction with neural networks has substantially reduced prediction error and attracted great attention from load prediction practitioners.
In the prior art, power system load prediction is carried out using traditional neural networks. However, a traditional neural network is a typical global approximation network: each weight of the network affects every output. Moreover, the weights determined after each training run contain randomness, so the relationship between input and output is not fixed and the prediction results differ between runs. The prior-art scheme therefore suffers from slow convergence and large prediction error.
Summary of the invention
Embodiments of the invention provide a power system load prediction method and device based on a deep belief network, which can improve convergence speed and reduce prediction error.
To achieve the above objective, embodiments of the application adopt the following technical scheme:
In a first aspect, a power system load prediction method based on a DBN is provided. The DBN is composed of multiple layers of restricted Boltzmann machines (RBMs) and includes at least one hidden layer and one visible layer. The load prediction method includes:
acquiring a training sample and a test sample; constructing the energy function of an RBM model;
training the at least one hidden layer and the visible layer layer by layer using the training sample, to obtain the weights of the training sample between the nodes of the at least one hidden layer and the visible layer;
obtaining a predicted value of the power system load from the output data obtained from the training sample and by inputting the test sample into the trained DBN.
In a second aspect, a power system load prediction device based on a DBN is provided, configured to perform the method of the first aspect.
The power system load prediction method and device based on a deep belief network provided by embodiments of the invention use multiple layers of restricted Boltzmann machines to compose a deep belief network including several hidden layers and one visible layer. By pre-training the RBMs layer by layer with a hierarchical unsupervised greedy pre-training method and using the obtained result as the initial value of the supervised-learning probabilistic model, learning performance is greatly improved, convergence speed is increased and prediction error is reduced.
Brief description of the drawings
In order to explain the technical schemes of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of the power system load prediction method based on a deep belief network provided by an embodiment of the invention;
Fig. 2 is an explanatory diagram of an RBM including one visible layer and one hidden layer;
Fig. 3 is an explanatory diagram of the optimal error rates under different structures of three kinds of models in an embodiment of the invention;
Fig. 4 is a curve of the mean absolute percentage error versus iteration count in an embodiment of the invention;
Fig. 5 is a structural diagram of the DBN-based power system load prediction device provided by an embodiment of the invention.
Detailed description
The technical schemes in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
To address the slow convergence and large prediction error of load prediction with traditional neural networks, a power system load prediction scheme based on a deep belief network (full English name: Deep Belief Network, abbreviated DBN) is proposed herein. With reference to Fig. 1, the method comprises the following steps:
101. Acquire a training sample and a test sample.
102. Construct the energy function of the RBM model.
103. Train at least one hidden layer and the visible layer layer by layer using the training sample, to obtain the weights of the training sample between the at least one hidden layer and the visible-layer nodes.
104. Obtain the predicted value of the power system load from the output data obtained from the training sample, and by inputting the test sample into the trained DBN.
In a specific embodiment of the invention, the DBN is composed of multiple layers of restricted Boltzmann machines (full English name: Restricted Boltzmann Machine, abbreviated RBM), including at least one hidden layer and one visible layer. The RBMs are pre-trained layer by layer with a hierarchical unsupervised greedy pre-training method, and the obtained result is used as the initial value of the supervised-learning probabilistic model, greatly improving learning performance. The details are as follows:
I. Data preprocessing
S1. Preprocess the training sample:
Assume all training samples are X = {X_1, X_2, ..., X_M}, where X_i represents one group of load data and M represents the number of load data groups. For the input samples, the algorithm normalizes in 4 steps:
(1) Calculate the mean: u = (1/M) Σ_{i=1}^{M} X_i, where u represents the mean of the samples.
(2) Calculate the variance: δ = (1/M) Σ_{i=1}^{M} (X_i − u)², where δ represents the variance of the samples.
(3) Whiten: X_i' = (X_i − u) / √δ, where X_i' represents the whitened sample data.
(4) Normalize: X_{i,n} = (X_i' − min X') / (max X' − min X'), where X_{i,n} represents the normalized sample data.
S2. Preprocess the test sample:
The test sample is first whitened using the mean and variance of the training sample, then normalized uniformly to between 0 and 1 according to the maximum and minimum of the training sample. Assume a single test sample is T; the normalization steps are:
(1) Whiten: T' = (T − u) / √δ, where T' represents the whitened test sample data, T is a single test sample, u represents the mean of the training sample, and δ represents the variance of the training sample.
(2) Normalize: T_n = (T' − min X') / (max X' − min X'), where T_n represents the normalized test sample data.
Whitening is used here because adjacent elements of natural data have large correlations; whitening therefore reduces the redundancy of the data, similar to dimensionality reduction by principal component analysis (full English name: Principal Component Analysis, abbreviated PCA).
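The normalization in S1 and S2 can be sketched as follows (a minimal illustrative sketch; the use of NumPy, the function names, and the treatment of the statistics as scalars are assumptions, not part of the original disclosure):

```python
import numpy as np

def preprocess_training(X):
    """Normalize training samples: mean, variance, whitening, min-max to [0, 1]."""
    u = X.mean()                      # sample mean u
    var = X.var()                     # sample variance
    Xw = (X - u) / np.sqrt(var)       # whitening (standardization)
    lo, hi = Xw.min(), Xw.max()
    Xn = (Xw - lo) / (hi - lo)        # uniform normalization to [0, 1]
    return Xn, (u, var, lo, hi)

def preprocess_test(T, stats):
    """Whiten and normalize a test sample with the *training* statistics."""
    u, var, lo, hi = stats
    Tw = (T - u) / np.sqrt(var)
    return (Tw - lo) / (hi - lo)
```

Note that the test sample reuses the training statistics, as required by S2, so a test value can fall slightly outside [0, 1].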
II. Basic principle and composition of the deep belief network
A DBN is an energy-based model with very powerful feature-learning ability, and is one of the earliest deep networks to be studied. Its learning method was proposed by Hinton et al.: essentially, a machine learning model with many hidden layers is built and trained on massive training data to learn more useful features, thereby ultimately improving classification or prediction accuracy.
A DBN can be regarded as a complex neural network composed of multiple layers of RBMs, comprising one visible layer and several hidden layers. The pre-training weights of the RBMs are trained with the contrastive divergence method. The deep neural network pre-trains the RBMs layer by layer with the hierarchical unsupervised greedy pre-training method, and the obtained result is used as the initial value of the supervised-learning probabilistic model, so that learning performance is greatly improved.
An RBM is a generative stochastic neural network used to learn a probability distribution over the input data. It can be trained in either a supervised or an unsupervised way as needed, and is widely used in many fields. The two-layer RBM network shown in Fig. 2 is expressed as follows: the vector h = (h_1, h_2, …, h_m) represents the hidden units and has m elements; the vector v = (v_1, v_2, …, v_n) represents the visible layer and has n elements. Because there are no connections between elements within a layer, the elements within each layer are mutually independent, that is:
p(h | v) = p(h_1 | v) p(h_2 | v) … p(h_m | v);
p(v | h) = p(v_1 | h) p(v_2 | h) … p(v_n | h).
In general, the distributions of the visible and hidden units satisfy the Bernoulli form; then:
p(h_j = 1 | v) = σ(b_j + Σ_{i=1}^{n} w_{ij} v_i),  p(v_i = 1 | h) = σ(a_i + Σ_{j=1}^{m} w_{ij} h_j),
where σ(x) = 1/(1 + e^{−x}), w_{ij} are the connection weights, and a_i, b_j are the bias terms defined below.
It follows that the probability distribution of the whole model can be obtained from the probability distribution of each layer, so unsupervised training can be carried out layer by layer, using the output of the lower RBM as the input of the upper RBM. At the last hidden layer, the output is fed into a softmax classifier, the estimation error is calculated, and the network is fine-tuned in reverse.
S3. Construct the energy function of the RBM model:
The energy function of the RBM model whose hidden and visible layers satisfy the Bernoulli distribution is defined as:
E(v, h) = −Σ_{i=1}^{n} Σ_{j=1}^{m} v_i w_{ij} h_j − Σ_{i=1}^{n} a_i v_i − Σ_{j=1}^{m} b_j h_j;
where a_i and b_j represent bias terms, w_{ij} represents the connection weight between the visible and hidden units, θ = (a, b, w) is the weight parameter of the RBM model, the vector h = (h_1, h_2, …, h_m) represents the hidden units and has m elements, and the vector v = (v_1, v_2, …, v_n) represents the visible layer and has n elements.
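For a Bernoulli RBM of this form, the energy function and the resulting factorized sigmoid conditionals can be sketched as follows (an illustrative sketch; the NumPy representation and function names are assumptions):

```python
import numpy as np

def energy(v, h, a, b, W):
    """E(v, h) = -v.W.h - a.v - b.h for an RBM with Bernoulli units."""
    return -v @ W @ h - a @ v - b @ h

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_h_given_v(v, b, W):
    """Factorized conditional p(h_j = 1 | v) = sigma(b_j + sum_i w_ij v_i)."""
    return sigmoid(b + v @ W)

def p_v_given_h(h, a, W):
    """Factorized conditional p(v_i = 1 | h) = sigma(a_i + sum_j w_ij h_j)."""
    return sigmoid(a + W @ h)
```

With all parameters zero, every unit is on with probability 0.5, which is a quick sanity check of the conditionals.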
S4. Calculate the joint probability distribution of the visible and hidden units:
Given the model parameters, the joint probability distribution of the visible and hidden units can be expressed as:
P(v, h; θ) = e^{−E(v, h)} / Z,
where Z = Σ_{v,h} e^{−E(v, h)} is a normalization factor, E(v, h) is the energy function of the RBM model of the hidden and visible layers, and the vector h = (h_1, h_2, …, h_m) represents the hidden units and has m elements.
From formulas (7) and (8) it can be derived that:
p(h_j = 1 | v) = σ(b_j + Σ_{i=1}^{n} w_{ij} v_i),
where σ(x) = 1/(1 + e^{−x}) is the sigmoid function.
S5. Calculate the marginal distribution of the visible and hidden layers:
From the joint probability distribution P(v, h) of the visible and hidden layers, the marginal distribution can be obtained:
P(v) = (1/Z) Σ_h e^{−E(v, h)},
where Z = Σ_{v,h} e^{−E(v, h)} is a normalization factor, E(v, h) is the energy function of the RBM model of the hidden and visible layers, and the vector h = (h_1, h_2, …, h_m) represents the hidden units and has m elements.
S6. Construct the log-likelihood function:
ln L(θ) = Σ_{i=1}^{l} ln P(v^{(i)}),
where l is the number of training samples, θ = (a, b, w) is the weight parameter of the RBM model, and P(v^{(i)}) is the marginal distribution of the visible layer.
S7. Calculate the gradient of the log-likelihood function:
Differentiating the log-likelihood function according to the gradient descent method gives:
∂ ln L(θ) / ∂θ = Σ_{i=1}^{l} ∂/∂θ ( ln Σ_h e^{−E(v^{(i)}, h)} − ln Σ_{v,h} e^{−E(v, h)} ),
which can be arranged as:
∂ ln L(θ) / ∂θ = Σ_{i=1}^{l} ( Σ_h P(h | v^{(i)}) ∂(−E(v^{(i)}, h))/∂θ − Σ_{v,h} P(v, h) ∂(−E(v, h))/∂θ ),
where ln L(θ) is the log-likelihood function, l is the number of training samples, θ = (a, b, w) is the weight parameter of the RBM model, P(h | v^{(i)}) is the conditional distribution of the hidden layer given the visible layer, E(v, h) is the energy function of the RBM model of the hidden and visible layers, and P(h, v^{(i)}) is the joint probability distribution of the visible and hidden units.
III. Flow of predicting power system load with the deep belief network
The idea of deep learning is to stack multiple layers, that is, the output of one layer serves as the input of the next. Suppose a system S has n layers (S1, …, Sn), with input I and output O; it can be represented as: I => S1 => S2 => … => Sn => O. By adjusting the system parameters so that the output O still reproduces the input I, a series of hierarchical features of the input I, namely S1, …, Sn, can be learned automatically. In this way, a hierarchical representation of the input information is achieved. Predicting power system load with the deep belief network is, in essence, such a deep-learning process.
The flow of predicting power system load with the deep belief network is as follows: the pre-training weights of the RBMs are trained with the contrastive divergence method. After analysis, the algorithm uses three hidden layers, pre-trains the RBMs layer by layer with the hierarchical unsupervised greedy pre-training method, and uses the obtained result as the initial value of the supervised-learning probabilistic model, thereby improving learning performance. Simulation data show that, compared with traditional neural network algorithms, this algorithm converges fast and its prediction error is small.
(1) Train the pre-training weights of the RBM with the contrastive divergence method. The KL divergence is an asymmetric measure, so the value of KL(Q || P) differs from that of KL(P || Q); the greater the difference between the two distributions, the greater the KL divergence. Maximizing the log-likelihood function of the RBM model ultimately evolves into calculating the difference of the KL divergences of two probability distributions:
KL(Q || P) − KL(P_k || P)   (formula 15)
where Q is the prior distribution and P_k is the distribution after k steps of Gibbs sampling. If the Gibbs chain has reached its stationary state (since the initial state v^{(0)} = v, it is already stationary), then P_k = P, that is, KL(P_k || P) = 0, so the estimation error obtained by the contrastive divergence algorithm equals 0.
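One contrastive-divergence update of this kind (CD-1, i.e. k = 1 Gibbs step) can be sketched as follows (a minimal sketch for a Bernoulli RBM; the sampling details, in-place updates and learning-rate value are assumptions, not the patent's exact procedure):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, a, b, W, lr=0.01, rng=None):
    """One CD-1 step: v0 -> h0 -> v1 -> h1, then update (W, a, b) in place."""
    if rng is None:
        rng = np.random.default_rng(0)
    ph0 = sigmoid(b + v0 @ W)                  # p(h | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample hidden state
    pv1 = sigmoid(a + W @ h0)                  # reconstruct the visible layer
    ph1 = sigmoid(b + pv1 @ W)                 # p(h | v1)
    # positive phase minus negative (one-step reconstruction) phase
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return W, a, b
```

Running this update over all training vectors for several epochs yields the pre-training weights of one RBM layer.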
(2) Use the obtained result as the initial value of the supervised-learning probabilistic model. Relative entropy, also called KL divergence, is commonly used to measure the distance between two probability distributions. The KL divergence between two probability distributions Q and P over the state space is defined as:
KL(Q || P) = Σ_x Q(x) ln( Q(x) / P(x) )   (formula 16)
where KL(Q || P) is the KL divergence between the two probability distributions Q and P over the state space.
(3) Pass the obtained pre-training weights and initial values as input data to the next layer and train again there. One layer of the network is trained at a time, layer by layer. Specifically, the first layer is trained with unlabeled data; its parameters are learned first (this layer can be regarded as the hidden layer of a three-layer neural network that minimizes the difference between output and input). After the (n−1)-th layer has been learned, its output is used as the input of the n-th layer, which is then trained; the parameters of each layer are thus obtained in turn. The training process is therefore an iterative process. After all layers have been trained, tuning is carried out with the wake-sleep algorithm.
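The greedy layer-by-layer pre-training described here can be sketched as follows (illustrative only; the per-layer training step is a simplified mean-field stand-in for contrastive divergence, and all names, sizes and hyperparameters are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_layer(data, n_hidden, epochs=5, lr=0.1, seed=0):
    """Train one RBM layer with a simplified (mean-field) CD-1 update."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    a = np.zeros(n_visible)
    b = np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            ph0 = sigmoid(b + v0 @ W)          # hidden probabilities
            pv1 = sigmoid(a + W @ ph0)         # reconstruction
            ph1 = sigmoid(b + pv1 @ W)
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
            a += lr * (v0 - pv1)
            b += lr * (ph0 - ph1)
    return W, a, b

def pretrain_dbn(data, layer_sizes):
    """Greedy layer-wise pre-training: each layer's output feeds the next."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, a, b = train_rbm_layer(x, n_hidden)
        layers.append((W, a, b))
        x = sigmoid(b + x @ W)   # propagate activations to the next layer
    return layers
```

The returned per-layer parameters would then serve as the initial values for supervised fine-tuning.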
Wake phase: the cognitive process. The abstract representation (node states) of each layer is produced from the external features via the upward (cognitive) weights, and the downward (generative) weights between layers are modified by gradient descent.
Sleep phase: the generative process. The states of the bottom layers are generated from the top-layer representation via the downward weights, while the upward weights between layers are modified.
(4) After pre-training, the DBN can be adjusted for discriminative performance with the BP algorithm using labeled data. Here, a label set is attached to the top layer (extending the associative memory), and a classification surface for the network is obtained through the bottom-up recognition weights learned earlier. This performs better than a network trained by the BP algorithm alone. This is intuitively easy to explain: the BP algorithm of a DBN only needs a local search of the weight-parameter space, which is faster than training a feedforward neural network and takes less time to converge.
(5) After the training and tuning steps end, the weights of the top two layers (i.e. the last hidden layer and the visible layer) are connected together, so that the output of the lower levels provides a reference clue or association for the top layer, and the top layer associates it with its memory content. The load value of the power system can thus be predicted.
In a specific embodiment: S8: calculate, according to formula 16, the KL divergence over the state space of the two probability distributions Q and P of the visible and hidden layers. S9: according to formula 15, maximizing the log-likelihood function of the RBM model ultimately evolves into calculating the difference of the KL divergences of the two probability distributions Q and P. S10: S1 to S9 perform sequential training of the training sample through the RBM models to obtain the weights of the training sample between the visible-layer and hidden-layer nodes; the weights are passed from the hidden layer to the visible layer, which can memorize this weight content and produce the predicted load of the training sample, for the load prediction process of the power system. S11: the test sample, plus the average load value and peak load value of the training sample obtained in S10, the average temperature, and the predicted-day average temperature from the numerical weather forecast, are used as the input data of the deep neural network algorithm; through the calculation of S1 to S9, the output data are the predicted load values of the power system.
IV. Experimental setup
Load data of the East Slovakia Electricity Company from January to October 2007, provided by the EUNITE competition road test, are used as training data, and the November and December data as prediction reference data. The input data of the neural network include: the 48 sampling points of the previous day (average load data per half hour), the average temperature of the previous day, the previous day's load peak, the previous day's load valley, the previous day's average, and the predicted daily average temperature. The prediction algorithm predicts all 48 data points of the next day at once.
In load prediction applications, load prediction data per half hour from hour 0 to hour 24 are generally required. The algorithm takes the 48 historical data points of the previous day from 0:00 to 24:00, plus the previous day's average load value, peak load value and average temperature, and the predicted daily average temperature from the numerical weather forecast, as the input data of the deep neural network algorithm. The output data are the predicted load data per half hour from 0:00 to 24:00 of the prediction day.
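The input vector described here (48 half-hour loads plus daily statistics) can be assembled as follows (an illustrative sketch; the field names and ordering are assumptions):

```python
import numpy as np

def build_input_vector(prev_day_loads, prev_avg_temp, forecast_avg_temp):
    """Assemble a 52-element input: the 48 half-hour loads of the previous
    day, plus its average and peak load, its average temperature, and the
    forecast average temperature of the prediction day."""
    loads = np.asarray(prev_day_loads, dtype=float)
    assert loads.shape == (48,), "expect 48 half-hour samples"
    return np.concatenate([
        loads,
        [loads.mean(), loads.max(), prev_avg_temp, forecast_avg_temp],
    ])
```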
To compare algorithm performance, a BP neural network and a self-organizing fuzzy neural network (full English name: Self-Organizing Fuzzy Neural Network, abbreviated SOFNN) were selected for comparison with the deep belief network. A BP neural network is good at capturing nonlinear regularities; when there are enough hidden-layer neurons, a 3-layer perceptron model can approximate any nonlinear function. The main advantage of the SOFNN algorithm is that it can automatically determine the network structure and give the model parameters, with good prediction accuracy.
The evaluation criterion is the mean absolute percentage error (MAPE), defined as follows:
MAPE = (1/n) Σ_{i=1}^{n} |D_i − D̂_i| / D_i × 100%,
where D_i and D̂_i are respectively the actual and predicted values of the daily peak load on the i-th day of a given month in 1997, and n is the number of days in that month.
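The MAPE criterion can be computed as follows (a minimal sketch; the function name is an assumption):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs(actual - predicted) / actual) * 100.0
```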
V. Performance analysis
(1) Influence of network structure on algorithm performance
First, the experiment studies the influence of the network structure (number of hidden layers and neurons) on the prediction effect. Load data from January to October 2007 are used as training data and the November load data as test data. Network structures with 1, 2 and 3 hidden layers are used, with 20, 50, 100, 200 and 400 neurons respectively; for convenience of comparison, every layer has the same number of neurons. For the initial parameter settings, the following parameters were selected manually from these ranges to obtain the optimal recognition rate: BP learning rate (0.1, 0.05, 0.02, 0.01, 0.005) and pre-training learning rate (0.01, 0.005, 0.002, 0.001). Fig. 3 shows the optimal error rates under the different structures of the three models. As can be seen from Fig. 3, when the number of hidden layers or neurons is small, the BP network without pre-training performs better. With only 1 hidden layer, the error rate of SOFNN becomes comparable to that of BP only when the number of neurons reaches 200; with 2 and 3 hidden layers, the performance of SOFNN approaches and then exceeds BP when the number of neurons reaches 100 and 50 respectively. The same holds for the DBN: SOFNN and the DBN perform relatively well when the number of neurons is small, and the opposite when the number of neurons is large. With the same number of hidden layers, because of the over-fitting problem, the performance of the BP network does not keep improving as the number of nodes increases; instead it declines when there are too many nodes, whereas the deep model keeps improving and then stabilizes. This shows that the performance of the pre-trained deep network gradually improves as the network scale expands (whether more hidden layers or more nodes). It can be seen that too few hidden layers and hidden nodes reduce the performance of the deep model. The reason can be explained as follows: the effect of the pre-trained model is to extract the core features of the input. Because of the sparsity condition, if the number of neurons is too small, only a few neurons are activated for some input samples; these features cannot represent the original input, so some information is lost and the performance declines. Although the performance of a deep model improves with network scale, the training time also increases, so performance must be weighed against training time.
(2) Comparison of algorithm convergence
Fig. 4 shows the curve of the mean absolute percentage error versus iteration count. The algorithm uses three hidden layers with 100 neurons per layer. The learning rate during pre-training is 0.01; during fine-tuning the learning rate is 0.1 and the momentum coefficient is 0.5. Load data from January to November 2007 are used as training data and the December load data as test data. The change of the mean absolute percentage error of BP, SOFNN and DBN over 1000 iterations is compared.
Analyzing Fig. 4, it can be seen that as the number of iterations increases, the error rates of the three models all gradually decrease, because the distribution of the network parameters moves closer to the minimum point as training proceeds. However, after a certain number of training iterations the error rate of the BP network oscillates and shows a gradually increasing trend, whereas the error rate of the pre-trained DBN network declines steadily. This indirectly verifies that pre-training can bring the distribution region of the initial network parameters closer to the minimum point and effectively avoid local oscillation.
Load prediction provides an important basis for distribution network management decisions and operating modes. The present invention proposes a load prediction algorithm using a deep belief network, addressing the slow learning speed and low prediction efficiency of existing neural network algorithms. Simulation results show that, compared with traditional neural network algorithms, the load prediction effect of the algorithm based on the deep belief network is markedly better.
Embodiments of the invention also provide a power system load prediction device based on a DBN for performing the load prediction method described in the above embodiments. With reference to Fig. 5, the load prediction device includes:
a data processing unit 501, configured to acquire a training sample and a test sample, and to construct the energy function of the RBM model;
a training unit 502, configured to train at least one hidden layer and the visible layer layer by layer using the training sample, to obtain the weights of the training sample between the at least one hidden layer and the visible-layer nodes;
a prediction unit 503, configured to obtain a predicted value of the power system load from the output data obtained from the training sample and by inputting the test sample into the trained DBN.
Optionally, the data processing unit 501 is further configured to normalize the training sample X = {X_1, X_2, ..., X_M} composed of load data X_i, where X_i represents one group of load data and M represents the number of load data groups; and to whiten the test sample according to the mean and variance of the training sample and normalize it uniformly to between 0 and 1 according to the maximum and minimum of the training sample.
Optionally, the distributions of the visible and hidden units satisfy the Bernoulli form.
The energy function, defined in the data processing unit 501, of the RBM model whose visible and hidden layers satisfy the Bernoulli distribution is:
E(v, h) = −Σ_{i=1}^{n} Σ_{j=1}^{m} v_i w_{ij} h_j − Σ_{i=1}^{n} a_i v_i − Σ_{j=1}^{m} b_j h_j;
where a_i and b_j represent bias terms, w_{ij} represents the connection weight between the visible and hidden units, θ = (a, b, w) is the weight parameter of the RBM model, the vector h = (h_1, h_2, …, h_m) represents the hidden units and has m elements, and the vector v = (v_1, v_2, …, v_n) represents the visible layer and has n elements.
Optionally, the training unit 502 is specifically configured to train the pre-training weights of the RBMs with the contrastive divergence method and use the obtained result as the initial value of the supervised-learning probabilistic model; to pass the obtained pre-training weights and initial values as input data to the next layer network and train again there; and to train one layer network at a time, training the at least one hidden layer and the visible layer layer by layer.
Optionally, the output data obtained from the training sample include at least the average load value and peak load value of the training sample, the average temperature, and the predicted daily average temperature from the numerical weather forecast.
The power system load prediction method and device based on a deep belief network provided by embodiments of the invention use multiple layers of restricted Boltzmann machines to compose a deep belief network including several hidden layers and one visible layer. By pre-training the RBMs layer by layer with a hierarchical unsupervised greedy pre-training method and using the obtained result as the initial value of the supervised-learning probabilistic model, learning performance is greatly improved, convergence speed is increased, prediction error is reduced, and the slow learning speed and low prediction efficiency of existing neural network algorithms are improved. Simulation results show that, compared with traditional neural network algorithms, the load prediction effect of the algorithm based on the deep belief network is markedly better.
The above are only specific embodiments of the invention, but the protection scope of the invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the invention, and these should all be covered within the protection scope of the invention. Therefore, the protection scope of the invention should be defined by the scope of the claims.

Claims (10)

1. A power system load forecasting method based on a deep belief network (DBN), characterised in that the DBN is composed of multiple restricted Boltzmann machines (RBMs) and comprises at least one hidden layer and one visible layer, the power system load forecasting method comprising:
obtaining a training sample and a test sample; constructing an energy function of an RBM model;
training the at least one hidden layer and the visible layer layer by layer using the training sample, to obtain weights of the training sample between the at least one hidden layer and visible layer nodes;
inputting the output data obtained from the training sample, together with the test sample, into the DBN after training, to obtain a predicted value of the power system load.
2. The power system load forecasting method according to claim 1, characterised in that, before constructing the energy function of the RBM model, the method further comprises:
normalizing a training sample X = {X1, X2, ..., XM} composed of load data Xi, wherein Xi represents one group of load data and M represents the number of groups of load data;
whitening the test sample according to the mean and variance of the training sample, and uniformly normalizing it to between 0 and 1 according to the maximum and minimum values of the training sample.
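The preprocessing of claim 2 can be sketched in a few lines of NumPy. This is one illustrative reading of the claim, not code from the specification; the helper names and the toy load values are hypothetical.

```python
import numpy as np

def minmax_normalize(X, lo, hi):
    """Uniformly map values into [0, 1] using the training min/max."""
    return (X - lo) / (hi - lo + 1e-8)

def whiten(X, mu, sigma):
    """Whiten a sample with the training mean and standard deviation."""
    return (X - mu) / (sigma + 1e-8)

# Hypothetical training sample X = {X1, X2, X3}; each row is one group
# of load data Xi (M = 3 groups of 3 load readings each).
X_train = np.array([[0.8, 1.2, 0.9],
                    [1.0, 1.5, 1.1],
                    [0.7, 1.3, 1.0]])
X_test = np.array([[0.9, 1.4, 1.0]])

X_train_n = minmax_normalize(X_train, X_train.min(), X_train.max())
X_test_w = whiten(X_test, X_train.mean(axis=0), X_train.std(axis=0))
```

Using the training statistics for both transformations keeps the test sample on the same scale as the data the network was trained on.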
3. The power system load forecasting method according to claim 1, characterised in that the distributions of the visible layer and the hidden units follow the Bernoulli form, and constructing the energy function of the RBM model comprises:
defining the energy function of the RBM model, in which the visible layer and the hidden layer both follow a Bernoulli distribution, as:
E(v, h) = - Σ_{i=1..n} Σ_{j=1..m} v_i w_ij h_j - Σ_{i=1..n} a_i v_i - Σ_{j=1..m} b_j h_j;
wherein a_i and b_j denote bias terms, w_ij denotes the connection weight between a visible unit and a hidden unit, θ = (a, b, w) is the weight parameter of the RBM model, the vector h = (h1, h2, ..., hm) denotes the hidden units and has m elements, and the vector v = (v1, v2, ..., vn) denotes the visible layer and has n elements.
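The Bernoulli RBM energy function of claim 3 is a straightforward double sum plus two bias terms; a minimal NumPy sketch with toy dimensions (n = 3 visible units, m = 2 hidden units, hypothetical values) makes the term-by-term structure explicit:

```python
import numpy as np

def rbm_energy(v, h, w, a, b):
    """Bernoulli RBM energy:
    E(v, h) = -sum_ij v_i w_ij h_j - sum_i a_i v_i - sum_j b_j h_j,
    where v is the visible vector (n elements), h the hidden vector
    (m elements), w the n x m weight matrix, a and b the bias terms."""
    return -(v @ w @ h) - a @ v - b @ h

# Toy configuration: n = 3 visible units, m = 2 hidden units.
v = np.array([1.0, 0.0, 1.0])
h = np.array([1.0, 1.0])
w = np.ones((3, 2))   # every connection weight w_ij = 1
a = np.zeros(3)
b = np.zeros(2)
energy = rbm_energy(v, h, w, a, b)  # -(2 + 2) - 0 - 0 = -4.0
```

With zero biases, only the interaction term contributes, which is easy to check by hand for the toy values above.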
4. The power system load forecasting method according to claim 3, characterised in that training the at least one hidden layer and the visible layer layer by layer using the training sample, to obtain weights of the training sample between the at least one hidden layer and visible layer nodes, comprises:
pre-training the weights of the RBM using the contrastive divergence method, and using the obtained result as initial values for supervised training of the probabilistic model;
passing the obtained pre-training weights and initial values as input data to the next layer of the network, and training again in the next layer of the network;
training one layer of the network at a time, so that the at least one hidden layer and the visible layer are trained layer by layer.
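The layer-wise contrastive divergence pre-training described in claim 4 can be sketched as a CD-1 update whose hidden activations become the visible input of the next RBM in the stack. This is a generic CD-1 sketch under Bernoulli-unit assumptions, not the patented training procedure; the learning rate, random seed and layer sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, w, a, b, lr=0.1):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM.
    Returns updated parameters and the hidden activation probabilities,
    which serve as the input (visible) data of the next RBM layer."""
    ph0 = sigmoid(v0 @ w + b)                     # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ w.T + a)                   # one Gibbs step back
    ph1 = sigmoid(pv1 @ w + b)                    # negative phase
    w = w + lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a = a + lr * (v0 - pv1)
    b = b + lr * (ph0 - ph1)
    return w, a, b, ph0

# Pre-train one layer (4 visible, 3 hidden units) on a toy sample;
# the final hidden activation h would feed the next RBM in the stack.
v = np.array([1.0, 0.0, 1.0, 1.0])
w1 = rng.normal(0.0, 0.01, (4, 3))
a1, b1 = np.zeros(4), np.zeros(3)
for _ in range(10):
    w1, a1, b1, h = cd1_update(v, w1, a1, b1)
```

The resulting weights then serve as the initial values for the supervised fine-tuning stage, as the claim describes.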
5. The power system load forecasting method according to claim 3, characterised in that
the output data obtained from the training sample includes at least the average load value, the peak load value and the average temperature of the training sample, and the predicted daily mean temperature from the numerical weather forecast.
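The four output quantities of claim 5 can be assembled into a simple feature vector; the load and temperature numbers below are made-up illustrative values, not data from the specification.

```python
import numpy as np

# Hypothetical one-day load series (already normalized) and matching
# temperature readings; the values are illustrative only.
load = np.array([0.62, 0.58, 0.71, 0.95, 0.88, 0.80])
temps = np.array([21.0, 19.5, 24.0, 27.5, 26.0, 23.0])
nwp_mean_temp = 25.0  # predicted daily mean temperature from the NWP

features = np.array([
    load.mean(),    # average load value
    load.max(),     # peak load value
    temps.mean(),   # average temperature
    nwp_mean_temp,  # numerical-weather-forecast daily mean temperature
])
```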
6. A power system load forecasting apparatus based on a deep belief network (DBN), characterised in that the DBN is composed of multiple restricted Boltzmann machines (RBMs) and comprises at least one hidden layer and one visible layer, the power system load forecasting apparatus comprising:
a data processing unit, configured to obtain a training sample and a test sample, and to construct an energy function of an RBM model;
a training unit, configured to train the at least one hidden layer and the visible layer layer by layer using the training sample, to obtain weights of the training sample between the at least one hidden layer and visible layer nodes;
a prediction unit, configured to input the output data obtained from the training sample, together with the test sample, into the DBN after training, to obtain a predicted value of the power system load.
7. The power system load forecasting apparatus according to claim 6, characterised in that
the data processing unit is further configured to normalize a training sample X = {X1, X2, ..., XM} composed of load data Xi, wherein Xi represents one group of load data and M represents the number of groups of load data; and is further configured to whiten the test sample according to the mean and variance of the training sample, and to uniformly normalize it to between 0 and 1 according to the maximum and minimum values of the training sample.
8. The power system load forecasting apparatus according to claim 6, characterised in that
the distributions of the visible layer and the hidden units follow the Bernoulli form;
the energy function of the RBM model, in which the visible layer and the hidden layer both follow a Bernoulli distribution, is defined in the data processing unit as:
E(v, h) = - Σ_{i=1..n} Σ_{j=1..m} v_i w_ij h_j - Σ_{i=1..n} a_i v_i - Σ_{j=1..m} b_j h_j;
wherein a_i and b_j denote bias terms, w_ij denotes the connection weight between a visible unit and a hidden unit, θ = (a, b, w) is the weight parameter of the RBM model, the vector h = (h1, h2, ..., hm) denotes the hidden units and has m elements, and the vector v = (v1, v2, ..., vn) denotes the visible layer and has n elements.
9. The power system load forecasting apparatus according to claim 8, characterised in that
the training unit is specifically configured to pre-train the weights of the RBM using the contrastive divergence method, and to use the obtained result as initial values for supervised training of the probabilistic model; to pass the obtained pre-training weights and initial values as input data to the next layer of the network, where training is performed again; and to train one layer of the network at a time, so that the at least one hidden layer and the visible layer are trained layer by layer.
10. The power system load forecasting apparatus according to claim 8, characterised in that
the output data obtained from the training sample includes at least the average load value, the peak load value and the average temperature of the training sample, and the predicted daily mean temperature from the numerical weather forecast.
CN201710021315.8A 2017-01-11 2017-01-11 Electrical power system load prediction method and device based on depth belief network CN106709820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710021315.8A CN106709820A (en) 2017-01-11 2017-01-11 Electrical power system load prediction method and device based on depth belief network

Publications (1)

Publication Number Publication Date
CN106709820A true CN106709820A (en) 2017-05-24

Family

ID=58907274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710021315.8A CN106709820A (en) 2017-01-11 2017-01-11 Electrical power system load prediction method and device based on depth belief network

Country Status (1)

Country Link
CN (1) CN106709820A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862384A (en) * 2017-11-16 2018-03-30 国家电网公司 A kind of method for building up of distribution network load disaggregated model
CN107947156A (en) * 2017-11-24 2018-04-20 国网辽宁省电力有限公司 Based on the electric network fault critical clearing time method of discrimination for improving Softmax recurrence
CN108303624A (en) * 2018-01-31 2018-07-20 舒天才 A kind of method for detection of partial discharge of switch cabinet based on voice signal analysis
CN108646149A (en) * 2018-04-28 2018-10-12 国网江苏省电力有限公司苏州供电分公司 Fault electric arc recognition methods based on current characteristic extraction
CN109579896A (en) * 2018-11-27 2019-04-05 佛山科学技术学院 Underwater robot sensor fault diagnosis method and device based on deep learning
CN109655711A (en) * 2019-01-10 2019-04-19 国网福建省电力有限公司漳州供电公司 Power distribution network internal overvoltage kind identification method
CN109871622A (en) * 2019-02-25 2019-06-11 燕山大学 A kind of low-voltage platform area line loss calculation method and system based on deep learning
WO2019141040A1 (en) * 2018-01-22 2019-07-25 佛山科学技术学院 Short term electrical load predication method
CN110084413A (en) * 2019-04-17 2019-08-02 南京航空航天大学 Safety of civil aviation risk index prediction technique based on PCA Yu depth confidence network
CN110119826A (en) * 2018-02-06 2019-08-13 天津职业技术师范大学 A kind of power-system short-term load forecasting method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170524