CN110580543A - Power load prediction method and system based on deep belief network - Google Patents
- Publication number
- CN110580543A CN110580543A CN201910722953.1A CN201910722953A CN110580543A CN 110580543 A CN110580543 A CN 110580543A CN 201910722953 A CN201910722953 A CN 201910722953A CN 110580543 A CN110580543 A CN 110580543A
- Authority
- CN
- China
- Prior art keywords
- bernoulli
- layer
- deep belief
- power load
- boltzmann machine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention discloses a power load prediction method based on a deep belief network. The method uses a sparse auto-encoding neural network to aggregate historical power load data and constructs a compositely optimized deep belief network prediction model based on restricted Boltzmann machines. From input to output, the prediction model comprises a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer. The two restricted Boltzmann machines are pre-trained with an unsupervised training method, after which the parameters are fine-tuned with a BP algorithm that includes a momentum term. The aggregated historical data are then input into the deep belief network prediction model for prediction. The invention also discloses a power load prediction system based on the deep belief network. The method better mines the regularity of historical load data, thereby improving prediction efficiency, and fully considers the influence of different factors, improving prediction accuracy.
Description
Technical Field
The invention relates to a power load prediction method and a power load prediction system, in particular to a power load prediction method and a power load prediction system based on a deep belief network.
Background
At present, for the management and scheduling of a power system, short-term load prediction is arguably the most important task: it provides data support for the power generation plan, so as to determine the plan that best meets economic, safety, environmental and equipment-limitation requirements and thereby ensure economic and safe operation of the power system. With the continuous development and improvement of the power system, higher requirements are placed on short-term load prediction for the power grid. Although mature traditional methods exist, they generally suffer from low prediction accuracy; the predicted results have a certain reference value but fall far short of the level expected by power enterprises.
With the continuous progress of modern scientific research, a new batch of emerging power load prediction techniques has appeared, such as neural network theory, fuzzy mathematics and support vector machines. These represent the progress and development of power load prediction and have further improved prediction accuracy. However, in the prior art, a single intelligent prediction method struggles with the challenges that multi-dimensional data poses to prediction accuracy and efficiency, whereas a prediction method based on a deep belief network integrates multiple data processing methods and intelligent algorithms to obtain better prediction accuracy and efficiency.
Disclosure of Invention
The invention provides a power load prediction method and system based on a deep belief network to solve the above technical problems in the prior art.
The technical scheme adopted by the invention to solve the technical problems in the prior art is as follows: a power load prediction method based on a deep belief network, which uses a sparse auto-encoding neural network to aggregate historical power load data and constructs a compositely optimized deep belief network prediction model based on restricted Boltzmann machines. From input to output, the prediction model comprises a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer. The Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine are pre-trained with an unsupervised training method, after which the parameters are fine-tuned with a BP algorithm that includes a momentum term. The aggregated historical data are then input into the deep belief network prediction model for prediction.
Furthermore, the factors of date, weather and demand-side management information are comprehensively considered when collecting historical power load data, and each data type is divided to form input feature vectors.
Further, before the sparse auto-encoding neural network is used to aggregate the data, the historical power load data undergo normalization preprocessing.
Further, the sparse auto-encoding neural network adopts a two-layer or three-layer neural network; the two-layer network comprises an input layer and an output layer, and the three-layer network comprises an input layer, a hidden layer and an output layer.
Furthermore, the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine each consist of two layers, a visible layer and a hidden layer; units in different layers are connected to each other, while units within the same layer are not connected.
The invention also provides a power load prediction system based on the deep belief network, comprising a data preprocessing unit and a deep belief network prediction model unit. The data preprocessing unit comprises a sparse auto-encoding neural network subunit, which takes historical power load data as input and performs aggregation processing. From input to output, the deep belief network prediction model unit comprises a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer. The model is pre-trained in an unsupervised manner, after which the parameters are fine-tuned with a BP algorithm that includes a momentum term; it takes the historical data processed by the data preprocessing unit as input and outputs the predicted value of the power load.
Further, the data preprocessing unit further comprises a normalization preprocessing subunit; the normalization preprocessing subunit performs normalization preprocessing on the historical power load data, and the processed data is output to the sparse auto-encoding neural network subunit.
Further, the sparse auto-encoding neural network subunit comprises a two-layer or three-layer neural network; the two-layer network comprises an input layer and an output layer, and the three-layer network comprises an input layer, a hidden layer and an output layer.
Further, the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine each comprise two layers, a hidden layer and a visible layer; units in different layers are connected to each other, while units within the same layer are not connected.
Further, when the deep belief network prediction model unit is trained, a temporary output layer is stacked on each of the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine; each is then pre-trained in an unsupervised manner, and the parameters are fine-tuned with a back-propagation (BP) algorithm that includes a momentum term.
The advantages and positive effects of the invention are as follows: the method fully mines the regularity in historical load data, inputs the data feature vectors into a sparse auto-encoding neural network for feature fusion, adopts a DBN (deep belief network) model for load prediction, performs unsupervised pre-training of the model, and finally fine-tunes toward the final prediction result through the BP algorithm. The method better mines the regularity of historical load data, thereby improving prediction efficiency, and fully considers the influence of different factors, improving prediction accuracy.
The invention provides a power load prediction method and system based on a deep belief network, which addresses the shortcomings of existing neural network algorithms in learning from historical data and improves learning efficiency. Simulation results show that, compared with traditional neural network algorithms, the prediction accuracy of the method is improved.
Compared with existing prediction methods and systems, the average prediction error of the power load over the four seasons is a MAPE of 3.59%, smaller than that of the other three methods. By taking the influences of temperature, illumination intensity and time-of-use electricity price into account, the method and system for power load prediction based on the deep belief network can more fully capture the complex relationships between the various influencing factors and the power load.
Drawings
FIG. 1 is a flow chart of a deep belief network-based power load prediction method of the present invention;
FIG. 2 is a schematic diagram illustrating the structure of a sparse self-encoding neural network according to the present invention;
FIG. 3 is a schematic diagram of the Gibbs sampling method of the present invention;
FIG. 4 is a flow chart of a deep belief network prediction model of the present invention;
FIG. 5 is a graph of comparative test results of the deep belief network prediction model of the present invention and a DBN model using two BB-RBMs;
FIG. 6 is a graph comparing the deep belief network-based power load prediction method of the present invention with a BP neural network prediction method, an SVM prediction method, and a conventional DBN method;
FIG. 7 is a histogram comparing the prediction results of the deep belief network-based power load prediction method, a BP neural network prediction method, an SVM prediction method and a conventional DBN prediction method under renewable energy grid-connection conditions.
Detailed Description
For a further understanding of the content, features and effects of the present invention, the following embodiments are described in detail in conjunction with the accompanying drawings:
The meanings of the English abbreviations used in this application are as follows:
DBN: deep belief network (prediction model);
RBM: restricted Boltzmann machine;
GB-RBM: Gaussian-Bernoulli restricted Boltzmann machine;
BB-RBM: Bernoulli-Bernoulli restricted Boltzmann machine;
AE: auto-encoding neural network;
SAE: sparse auto-encoding neural network;
SVM: support vector machine;
BP: back-propagation algorithm;
S-BP: standard error back-propagation algorithm based on gradient descent;
I-BP: error back-propagation algorithm with a momentum term added to the weight update rule;
Gibbs: Gibbs sampling;
MAPE: mean absolute percentage error.
Referring to fig. 1 to 7, a power load prediction method based on a deep belief network uses a sparse auto-encoding neural network to aggregate historical power load data and constructs a compositely optimized deep belief network prediction model based on restricted Boltzmann machines. From input to output, the prediction model comprises a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer: the output of the Gaussian-Bernoulli restricted Boltzmann machine serves as the input of the Bernoulli-Bernoulli restricted Boltzmann machine, whose output in turn serves as the input of the linear regression output layer, so the input data pass sequentially through the two restricted Boltzmann machines and are output by the linear regression layer. The model is first pre-trained with an unsupervised training method, and the parameters are then fine-tuned with a BP algorithm that includes a momentum term. Finally, the aggregated historical data are input into the deep belief network prediction model for prediction, yielding the predicted value of the power system load.
When pre-training with the unsupervised training method, Gibbs sampling is performed on the training data, and the contrastive divergence (CD-k) algorithm is used to accelerate the training process of the deep belief network prediction model. During training, the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine can each be pre-trained with the unsupervised training method; Gibbs sampling can be performed on the training data for each, and the contrastive divergence (CD-k) algorithm is used to accelerate the training of each restricted Boltzmann machine.
Factors such as date, weather and demand-side management information can be comprehensively considered when collecting historical power load data. Weather factors can include temperature, precipitation, wind speed and solar radiation; day-type factors can include holidays and workdays; demand-side management information can include electricity consumption data and time-of-use electricity prices. Each data type is divided in detail to form the input feature vectors.
Before the sparse self-coding neural network is adopted to aggregate data, normalization preprocessing can be performed on historical data of the power load.
The sparse self-coding neural network can adopt two layers or three layers of neural networks; the two layers of neural networks comprise an input layer and an output layer, and the three layers of neural networks comprise an input layer, a hidden layer and an output layer.
The Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine can each be constructed from two layers, a visible layer and a hidden layer; units in different layers can be connected to each other, and there is no connection between units in the same layer.
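The stacked structure described above, from input through the GB-RBM and BB-RBM layers to the linear regression output, can be sketched as a forward pass. The layer sizes and zero-initialized parameters below are illustrative assumptions only, not values from the invention:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_predict(x, params):
    """Forward pass: GB-RBM hidden layer, BB-RBM hidden layer,
    then a linear regression output layer."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = sigmoid(x @ W1 + b1)   # GB-RBM hidden activations
    h2 = sigmoid(h1 @ W2 + b2)  # BB-RBM hidden activations
    return h2 @ W3 + b3         # linear output (predicted load)

x = np.zeros(6)                             # aggregated feature vector
params = [(np.zeros((6, 8)), np.zeros(8)),  # GB-RBM layer: 6 -> 8
          (np.zeros((8, 4)), np.zeros(4)),  # BB-RBM layer: 8 -> 4
          (np.zeros((4, 1)), np.zeros(1))]  # linear layer: 4 -> 1
y = dbn_predict(x, params)
```

In the patented method the weights would come from unsupervised pre-training followed by BP fine-tuning; here they are zeros merely to show the data flow.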
The invention also provides an embodiment of the power load prediction system based on the deep belief network, comprising a data preprocessing unit and a deep belief network prediction model unit. The data preprocessing unit comprises a sparse auto-encoding neural network subunit, which takes historical power load data as input and performs aggregation processing. From input to output, the deep belief network prediction model unit comprises a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer: the output of the Gaussian-Bernoulli restricted Boltzmann machine serves as the input of the Bernoulli-Bernoulli restricted Boltzmann machine, whose output in turn serves as the input of the linear regression output layer. The model is first pre-trained in an unsupervised manner, and the parameters are then fine-tuned with a BP algorithm that includes a momentum term; it takes the historical data processed by the data preprocessing unit as input and outputs the predicted value of the power load. Finally, the aggregated historical data are input into the deep belief network prediction model for prediction, yielding the predicted value of the power system load.
Further, the data preprocessing unit may further comprise a normalization preprocessing subunit; the normalization preprocessing subunit performs normalization preprocessing on the historical power load data, and the processed data is output to the sparse auto-encoding neural network subunit.
Further, the sparse self-encoding neural network subunit may comprise a two-layer or three-layer neural network; the two layers of neural networks comprise an input layer and an output layer, and the three layers of neural networks comprise an input layer, a hidden layer and an output layer.
Further, the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine may each comprise two layers, a hidden layer and a visible layer; units in different layers may be connected to each other, and there may be no connection between units in the same layer.
Further, when the deep belief network prediction model unit is trained, a temporary output layer can be stacked on each of the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine; each can then be pre-trained in an unsupervised manner, and the parameters fine-tuned with a BP algorithm that includes a momentum term.
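The weight update rule with a momentum term (the I-BP variant listed among the abbreviations) can be sketched as follows. The learning rate 0.01 and momentum coefficient 0.9 are illustrative assumptions, not values specified by the patent:

```python
def momentum_update(w, grad, velocity, lr=0.01, mu=0.9):
    """BP weight update with a momentum term:
    velocity <- mu * velocity - lr * grad;  w <- w + velocity."""
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

w, vel = 1.0, 0.0
w, vel = momentum_update(w, 1.0, vel)  # first step: plain gradient step
w, vel = momentum_update(w, 1.0, vel)  # second step: momentum accumulates
```

When successive gradients point in the same direction, the momentum term accumulates velocity and accelerates convergence; when the gradient oscillates, it damps the oscillation.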
The working principle of the invention is explained below in connection with a preferred embodiment of the invention:
Firstly, load data selection and data aggregation using SAE
In order to make the prediction result more accurate, the various factors influencing the load are considered, including date attributes, weather data, and demand-side management information such as peak-valley time-of-use electricity prices; each data type is divided in detail to form the input feature vector of the power load prediction model. The feature vectors are then put into an auto-encoder for feature aggregation, so as to achieve effective aggregation of the nonlinear data.
S1. Selection of load data
The data include four main sets of measured variables: weather data (temperature, precipitation, wind speed and solar radiation), day-type data, electricity consumption data and time-of-use electricity prices.
① Selecting data
Non-holidays: historical data from several non-holiday days before the day to be predicted can be selected as the training sample set. Holidays: data from days similar to the day to be predicted can be selected using the grey correlation projection method (Algorithm 1).
Calculate the degree of association between Y0j and Yij:
Where λ is the resolution factor.
Calculating the weight occupied by each influence factor:
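The association-degree computation described above can be sketched with a common grey relational formulation; the exact formulas in the patent are rendered as images and not reproduced here, and the resolution coefficient λ = 0.5 is an illustrative assumption:

```python
import numpy as np

def grey_relational_degree(reference, candidates, lam=0.5):
    """Grey relational degree of each candidate day series
    against the reference (day-to-be-predicted) series.
    reference: shape (n,);  candidates: shape (m, n);  lam: resolution coefficient."""
    delta = np.abs(candidates - reference)              # absolute differences
    d_min, d_max = delta.min(), delta.max()             # global min/max difference
    xi = (d_min + lam * d_max) / (delta + lam * d_max)  # relational coefficients
    return xi.mean(axis=1)                              # degree per candidate day

ref = np.array([0.6, 0.7, 0.8])
cand = np.array([[0.6, 0.7, 0.8],    # identical to the reference
                 [0.1, 0.2, 0.3]])   # dissimilar day
deg = grey_relational_degree(ref, cand)
```

A candidate identical to the reference attains the maximum degree 1, so sorting by `deg` selects the most similar historical days.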
② Data preprocessing
The purpose of data normalization is to turn dimensional data into dimensionless data and map it into the range 0-1 (fig. 5), which can improve the convergence rate of the prediction model.
where Xi is the sample data and Xmax and Xmin are the maximum and minimum values of Xi, respectively; the normalized value of Xi is (Xi - Xmin) / (Xmax - Xmin).
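The min-max normalization step can be sketched as follows; the load values are hypothetical illustration data, not data from the patent:

```python
import numpy as np

def min_max_normalize(x):
    """Map samples into [0, 1] via (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

loads = np.array([320.0, 410.0, 505.0, 470.0, 365.0])  # hypothetical load samples
scaled = min_max_normalize(loads)
```

After scaling, the smallest sample maps to 0 and the largest to 1, so all features share a common dimensionless range.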
S2. Aggregating data using SAE
The auto-encoding neural network (AE) is a three-layer unsupervised neural network that forms higher-level concepts of the input vector in the next layer through nonlinear mapping. AE attempts to approximate the identity function, bringing the target output value close to the input value so as to minimize the expected reconstruction error. Fig. 2 shows the basic architecture of AE. The first layer is the input layer, the middle layer is the hidden layer, and the last layer is the output layer. The AE network performs a nonlinear transformation from one layer to the next through the activation function.
The learning process of the AE network includes two phases: encoder and decoder stages.
The encoder converts the input into a more abstract feature vector, from which the decoder reconstructs the input. The encoder is a forward propagation process. Given the training set {x(1), x(2), ..., x(i)}, x(i) ∈ Rn, the encoder performs the nonlinear mapping to the hidden layer through the sigmoid function f(z) as follows:
The decoding process reconstructs the input at the output layer. The reconstructed vectors {y(1), y(2), ..., y(i)} are given by:
where W is the weight matrix between the layers and b is the bias. {W, b} are the trainable parameters of the encoder and decoder.
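The encoder-decoder pass described above can be sketched as follows; the layer sizes and zero-initialized parameters are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ae_forward(x, W1, b1, W2, b2):
    """Encoder: h = f(W1 x + b1);  decoder: y = f(W2 h + b2)."""
    h = sigmoid(W1 @ x + b1)  # abstract feature vector (hidden layer)
    y = sigmoid(W2 @ h + b2)  # reconstruction of the input
    return h, y

x = np.array([0.2, 0.8, 0.5])
W1, b1 = np.zeros((2, 3)), np.zeros(2)  # encoder: 3 inputs -> 2 hidden units
W2, b2 = np.zeros((3, 2)), np.zeros(3)  # decoder: 2 hidden -> 3 outputs
h, y = ae_forward(x, W1, b1, W2, b2)
```

Training would adjust {W1, b1, W2, b2} so that y approaches x, minimizing the reconstruction error.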
In the network, a hidden-layer node is considered "inactive" if its output is zero or near zero, and "active" when its output is 1 or near 1. To make the hidden layer sparse, most of its nodes need to be inactive. Thus, a sparsity constraint is imposed on the hidden nodes, as follows:
where ρ̂j is the average activation value of hidden node j, m is the number of training samples, and aj(x(i)) denotes the activation value of hidden node j on the i-th sample.
For the training set, to avoid learning the same function and to improve the ability to capture important information, it is necessary to set the average activation of each hidden node j to zero or close to zero. Thus, additional penalty factors are:
where ρ is the sparsity parameter and KL(·) is the Kullback-Leibler divergence, used as a penalty measuring the gap between the desired and actual distributions. If ρ̂j = ρ, then KL(ρ‖ρ̂j) = 0; as ρ̂j approaches 0 or 1, the KL divergence approaches infinity.
where J(W, b) is a loss function intended to make the output of the decoder as close as possible to the input. The parameters (W, b) can be updated using a gradient descent algorithm.
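The KL-divergence sparsity penalty described above can be sketched as follows; the clipping constant 1e-8 is an assumption added to avoid log(0):

```python
import numpy as np

def kl_sparsity_penalty(rho, rho_hat):
    """Sum over hidden nodes of KL(rho || rho_hat_j):
    rho*log(rho/rho_hat) + (1-rho)*log((1-rho)/(1-rho_hat))."""
    rho_hat = np.clip(rho_hat, 1e-8, 1.0 - 1e-8)  # guard against log(0)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat))))

at_target = kl_sparsity_penalty(0.05, np.array([0.05, 0.05]))  # activations match rho
off_target = kl_sparsity_penalty(0.05, np.array([0.5, 0.9]))   # activations too high
```

The penalty is zero when each node's average activation equals the sparsity parameter ρ and grows without bound as activations drift toward 0 or 1, pushing most hidden nodes to stay inactive.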
Secondly, constructing the deep belief network model. The model is a compositely optimized, improved deep belief network, hereinafter referred to as the improved DBN.
Introduction to the RBM: a restricted Boltzmann machine (RBM) is a stochastic neural network (i.e., a network whose neuron nodes take random values when activated). It comprises a visible layer and a hidden layer. Neurons in the same layer are independent of each other, while neurons in different layers are interconnected (bidirectional connections). When the network is trained and used, information flows in both directions, and the weights in the two directions are the same, but the bias values differ (the number of bias values equals the number of neurons). The neurons of the upper layer constitute the hidden layer, whose values are represented by the vector h; the neurons of the lower layer constitute the visible layer, whose values are represented by the vector v. The connection weights can be represented by a matrix W. The difference from a DNN is that the RBM does not distinguish forward and reverse directions: the state of the visible layer can act on the hidden layer, and the state of the hidden layer can likewise act on the visible layer. The bias coefficients of the hidden layer form the vector b, and those of the visible layer form the vector a.
RBM model parameters: mainly the weight matrix W, the bias coefficient vectors a and b, the hidden-layer neuron state vector h and the visible-layer neuron state vector v.
In this method, the DBN consists of a GB-RBM and a BB-RBM.
S3. Constructing the BB-RBM model
For the Bernoulli RBM (Bernoulli-Bernoulli RBM, BB-RBM), the visible units and hidden units are both binary stochastic units. The energy function is:
E(v, h) = -a^T v - b^T h - v^T W h (formula 9)
where the weight matrix W, the bias coefficient vectors a and b, the hidden-layer neuron state vector h and the visible-layer neuron state vector v are the parameters of the RBM. From fig. 5 we can see that the model is divided into two groups of units, v and h; their biases correspond to a and b, and the interaction between them is described by W.
According to the energy function, the joint probability distribution of the model is defined as:

P(v,h) = e^{-E(v,h)} / Z

where Z = Σ_{v,h} e^{-E(v,h)} is the partition function.
The "restriction" means that there are no connections between nodes within the same layer of the RBM model, which implies that conditional independence holds among the hidden units (and among the visible units). In a BB-RBM, all units are binary stochastic units, which means the input data should be binary, or real values between 0 and 1 interpreted as the probability that the corresponding visible unit is active.
The conditional probability distribution of each unit is given by the sigmoid function of the input it receives:

P(h_j = 1 | v) = σ(b_j + Σ_i v_i W_ij)
P(v_i = 1 | h) = σ(a_i + Σ_j W_ij h_j)

where σ(x) = 1/(1 + exp(-x)) is the sigmoid activation function.
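As a numerical illustration, a minimal pure-Python sketch of the BB-RBM energy (formula 9) and the conditional probability P(h_j = 1 | v); the toy parameter values below are invented for demonstration only:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bb_rbm_energy(v, h, a, b, w):
    """E(v,h) = -a^T v - b^T h - v^T W h for binary v and h."""
    e = -sum(ai * vi for ai, vi in zip(a, v))
    e -= sum(bj * hj for bj, hj in zip(b, h))
    e -= sum(v[i] * w[i][j] * h[j]
             for i in range(len(v)) for j in range(len(h)))
    return e

def p_h_given_v(v, b, w):
    """P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i W_ij)."""
    return [sigmoid(b[j] + sum(v[i] * w[i][j] for i in range(len(v))))
            for j in range(len(b))]

# toy RBM: 3 visible units, 2 hidden units
a = [0.1, -0.2, 0.0]                     # visible biases
b = [0.05, -0.05]                        # hidden biases
w = [[0.5, -0.3], [0.2, 0.4], [-0.1, 0.1]]
v = [1, 0, 1]
h = [1, 0]

energy = bb_rbm_energy(v, h, a, b, w)    # -0.55 for these values
probs = p_h_given_v(v, b, w)
```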
S4, constructing the GB-RBM model
Unlike the BB-RBM, the energy function of the GB-RBM is defined as:

E(v,h) = Σ_i (v_i - a_i)^2 / (2σ_i^2) - b^T h - Σ_{i,j} (v_i / σ_i) W_ij h_j

where σ_i is the standard deviation of the Gaussian noise of v_i.
The conditional probability distribution of the hidden units takes the same form as in the BB-RBM, and the joint probability distribution is again P(v,h) = e^{-E(v,h)} / Z. For the visible units,

p(v_i | h) = N(a_i + σ_i Σ_j W_ij h_j, σ_i^2)

i.e., v_i takes real values obeying a Gaussian distribution with mean μ and variance σ^2.
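Assuming the common GB-RBM formulation with a per-unit noise standard deviation σ_i (the patent text elides the exact equation), the energy can be sketched numerically as follows; all parameter values are invented for illustration:

```python
def gb_rbm_energy(v, h, a, b, w, sigma):
    """Gaussian-Bernoulli RBM energy (assumed standard form):
    E(v,h) = sum_i (v_i - a_i)^2 / (2 sigma_i^2)
             - sum_j b_j h_j
             - sum_{i,j} (v_i / sigma_i) w_ij h_j
    """
    quad = sum((v[i] - a[i]) ** 2 / (2.0 * sigma[i] ** 2)
               for i in range(len(v)))
    lin = sum(b[j] * h[j] for j in range(len(h)))
    inter = sum((v[i] / sigma[i]) * w[i][j] * h[j]
                for i in range(len(v)) for j in range(len(h)))
    return quad - lin - inter

# toy example: 2 real-valued visible units, 1 binary hidden unit
energy_gb = gb_rbm_energy(v=[0.5, -1.0], h=[1],
                          a=[0.0, 0.0], b=[0.1],
                          w=[[0.2], [-0.3]], sigma=[1.0, 1.0])
```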
S5 Gibbs sampling
Based on the above model, the RBM needs to be trained, i.e., the parameters θ tuned, to fit the given training samples. Maximum likelihood estimation is a commonly used method of estimating model parameters and is applied here: the parameters θ are sought that maximize the probability of all training data x under the model distribution, so that the training problem of the RBM becomes an extremum problem for the likelihood function.
Given a training set D, the log-likelihood of the model over the training samples can be expressed as:

ln L(θ | D) = Σ_{x∈D} ln P(x | θ)

where θ = {b, a, W} and D is the training data set.
The gradient with respect to the weights can be expressed as:

∂ ln L / ∂W = E_{P*}[x·h^T] - E_P[x·h^T]

where E_{P*} is the expectation under the empirical distribution P* and E_P is the expectation under the model distribution P.
Although E_{P*}[x·h] can be easily calculated from the training data, E_P[x·h] ranges over all possible values of v and h: all possible combinations of visible-unit and hidden-unit values must be traversed, and the number of combinations grows exponentially, so it is difficult to compute directly. In general, Gibbs sampling (see fig. 3) is used to estimate the expectation under P.
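A minimal sketch of one alternating Gibbs step (v → h → v'), using the sigmoid conditionals defined earlier; the parameter values are illustrative only:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_h_given_v(v, b, w, rng):
    """Sample binary hidden states from P(h_j = 1 | v)."""
    probs = [sigmoid(b[j] + sum(v[i] * w[i][j] for i in range(len(v))))
             for j in range(len(b))]
    return [1 if rng.random() < p else 0 for p in probs]

def sample_v_given_h(h, a, w, rng):
    """Sample binary visible states from P(v_i = 1 | h)."""
    probs = [sigmoid(a[i] + sum(w[i][j] * h[j] for j in range(len(h))))
             for i in range(len(a))]
    return [1 if rng.random() < p else 0 for p in probs]

def gibbs_step(v, a, b, w, rng):
    """One full Gibbs step: v -> h -> v'."""
    h = sample_h_given_v(v, b, w, rng)
    v_new = sample_v_given_h(h, a, w, rng)
    return v_new, h

rng = random.Random(0)
a = [0.0, 0.0, 0.0]
b = [0.0, 0.0]
w = [[0.5, -0.3], [0.2, 0.4], [-0.1, 0.1]]
v1, h1 = gibbs_step([1, 0, 1], a, b, w, rng)
```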
S6, CD-k algorithm
To accelerate the training process of the RBM, the contrastive divergence (CD-k) algorithm is adopted for unsupervised learning. In the CD-k algorithm (k denotes the number of sampling steps), even when k = 1, i.e., only one Gibbs sampling step is performed, a good fitting effect can be achieved, so the parameters are generally fitted using the CD-1 form of the algorithm.
W ← W + ε × [p(h=1|v) v^T - p(h*=1|v*) v*^T]
b ← b + ε × (v - v*)
c ← c + ε × [p(h=1|v) - p(h*=1|v*)] (formula 17)
Here v* is the reconstruction of the visible layer v, and h* is the hidden layer obtained from the reconstructed visible layer v*. With the learning rate set to ε, training the RBM by the contrastive divergence algorithm yields the weight matrix W, the bias vector b of the visible layer, and the bias vector c of the hidden layer.
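The CD-1 updates of formula 17 can be sketched as follows; as in the text, b is the visible-layer bias and c the hidden-layer bias, and the toy parameter values are invented:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_update(v, W, b, c, eps, rng):
    """One CD-1 parameter update following formula 17."""
    n_v, n_h = len(b), len(c)
    # positive phase: p(h = 1 | v) and a hidden sample h
    ph = [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(n_v)))
          for j in range(n_h)]
    h = [1 if rng.random() < p else 0 for p in ph]
    # reconstruction v* of the visible layer from h
    pv = [sigmoid(b[i] + sum(W[i][j] * h[j] for j in range(n_h)))
          for i in range(n_v)]
    v_star = [1 if rng.random() < p else 0 for p in pv]
    # negative phase: p(h* = 1 | v*)
    ph_star = [sigmoid(c[j] + sum(v_star[i] * W[i][j] for i in range(n_v)))
               for j in range(n_h)]
    # parameter updates of formula 17, learning rate eps
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += eps * (ph[j] * v[i] - ph_star[j] * v_star[i])
    for i in range(n_v):
        b[i] += eps * (v[i] - v_star[i])
    for j in range(n_h):
        c[j] += eps * (ph[j] - ph_star[j])
    return W, b, c

rng = random.Random(42)
W = [[0.1, -0.1], [0.0, 0.2], [0.3, 0.0]]
b = [0.0, 0.0, 0.0]   # visible biases
c = [0.0, 0.0]        # hidden biases
W, b, c = cd1_update([1, 0, 1], W, b, c, eps=0.1, rng=rng)
```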
S7 construction of improved DBN model
The key problem in applying the DBN to load prediction under complex factors is how to construct a suitable prediction model and how to train the constructed model effectively. In the embodiment of the invention, the DBN model shown in fig. 4 is improved to suit the short-term power load prediction problem.
The improved DBN model is composed of one GB-RBM, several BB-RBMs and one linear regression output layer: (1) the GB-RBM serves as the first RBM in the stack forming the DBN, so that continuous real-valued input data such as weather data and load data can be effectively converted into binary features; (2) because the BB-RBM is suited to modeling binary data (such as black-and-white images or encoded text), all other RBMs are BB-RBMs and perform feature extraction on the input data; (3) the hidden layer of the last RBM and the output layer form a linear regression network structure: the feature vector extracted by the improved DBN is taken as input and processed by a linear activation function to obtain a power load time series y with a time interval of 15 minutes, 30 minutes or 1 hour.
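A schematic forward pass of such a stack (a mean-field sketch under assumed toy parameters, not the patent's actual implementation) could look like:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rbm_up(v, bias, w):
    """Mean-field upward pass of one RBM: v -> p(h | v)."""
    return [sigmoid(bias[j] + sum(v[i] * w[i][j] for i in range(len(v))))
            for j in range(len(bias))]

def dbn_predict(x, rbm_stack, out_w, out_b):
    """Forward pass of the improved DBN: the first (GB-)RBM takes the
    real-valued input, each following BB-RBM takes the previous hidden
    activations, and a linear output layer maps the final features to
    the load forecast."""
    h = x
    for bias, w in rbm_stack:
        h = rbm_up(h, bias, w)
    return out_b + sum(out_w[j] * h[j] for j in range(len(h)))

# toy stack: 3 inputs -> 2 hidden -> 2 hidden -> 1 linear output
stack = [([0.0, 0.0], [[0.5, -0.3], [0.2, 0.4], [-0.1, 0.1]]),
         ([0.1, -0.1], [[0.3, 0.2], [-0.2, 0.5]])]
y = dbn_predict([0.7, -1.2, 0.4], stack, out_w=[1.0, -1.0], out_b=0.05)
```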
Thirdly, pre-training the model with unsupervised training and fine-tuning the parameters with the BP algorithm
Pre-training the DBN with unsupervised learning has been effective in classification tasks such as speech recognition, and it provides a better parameter basis for the parameter fine-tuning of the next step. During hybrid pre-training, a temporary output layer needs to be stacked on the BB-RBM and GB-RBM being trained, to ensure the integrity of the prediction model.
S8 unsupervised pre-training
Unsupervised learning is applied to the deep belief network. A sparse self-coding neural network, i.e., a sparse autoencoder model, is adopted as the data preprocessing tool in deep learning: the sparse autoencoder trains the sparse self-coding parameters and seeks reconstructed data that approach the original data a, i.e.:
Extract M training samples and calculate the reconstruction error function C, where y is the coefficient vector and w is the weight coefficient of the training samples. The sample coefficients y and weighting coefficients w are then obtained through an optimization formula that includes a sparsity penalty factor.
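Since the exact optimization formula is elided in the text, the sketch below assumes a standard sparse-autoencoder cost: mean squared reconstruction error plus a KL-divergence sparsity penalty weighted by a factor beta (all names and values here are illustrative assumptions):

```python
import math

def kl_penalty(rho, rho_hat):
    """KL-divergence sparsity penalty between the target activation
    rho and the observed mean hidden activation rho_hat."""
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

def sparse_ae_cost(xs, x_hats, hidden_means, rho=0.05, beta=0.1):
    """Reconstruction error over M samples plus a sparsity penalty."""
    m = len(xs)
    recon = sum(sum((xi - xh) ** 2 for xi, xh in zip(x, xhat))
                for x, xhat in zip(xs, x_hats)) / (2.0 * m)
    penalty = beta * sum(kl_penalty(rho, rh) for rh in hidden_means)
    return recon + penalty

# one sample, its reconstruction, and the mean hidden activation
cost = sparse_ae_cost(xs=[[1.0, 0.0]], x_hats=[[0.8, 0.1]],
                      hidden_means=[0.05])
```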
S9, fine adjustment of parameters by BP algorithm
After layer-by-layer hybrid pre-training provides good network parameters for the improved DBN model, the BP algorithm is used for global parameter tuning. The BP neural network is a multilayer feedforward algorithm characterized by forward transmission of information and back propagation of errors; the cycle is repeated until the expected error is reached and the trained model meets the expected result. However, since the BP neural network suffers from slow learning and low precision, a measure to improve the convergence speed of the back propagation algorithm is adopted, namely adding an impulse (momentum) term:
Δω_ij(n+1) = η δ_j o_i + α Δω_ij(n), 0 < α < 1 (formula 21)
where η is the learning rate, δ_j is the error signal of neuron j, o_i is the output of neuron i, and α is the momentum coefficient.
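A sketch of the momentum update of formula 21 applied to a single weight (the numeric values are illustrative):

```python
def momentum_update(w, dw_prev, delta_j, o_i, eta=0.1, alpha=0.9):
    """Weight change with an added momentum term (formula 21):
    dw(n+1) = eta * delta_j * o_i + alpha * dw(n), 0 < alpha < 1."""
    dw = eta * delta_j * o_i + alpha * dw_prev
    return w + dw, dw

w, dw = 0.5, 0.0
w, dw = momentum_update(w, dw, delta_j=0.2, o_i=1.0)  # dw = 0.02
w, dw = momentum_update(w, dw, delta_j=0.2, o_i=1.0)  # dw = 0.02 + 0.9*0.02
```

With a constant gradient, the momentum term accumulates the previous step, so the second change (0.038) is larger than the first (0.02): this is how the impulse term enlarges the search step and helps cross narrow local minima.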
Fourth, the experimental setup
The load test sample data are based on actual load data from January 2016 to December 2017 in certain regions of China and comprise four main measured variables: weather data (temperature, precipitation, wind speed and solar radiation), day-type data, electric energy data, and time-of-use electricity price data (peak 7:00-11:00, normal 19:00-23:00, valley 11:00-19:00 and 23:00-7:00 of the next day). The weather data originate from a weather website with a collection frequency of 1 h, and the hourly load data corresponding to the weather data are selected for analysis.
To compare algorithm performance, the most common method for fine-tuning DBN parameters, a standard gradient-descent BP algorithm (S-BP for short), is selected and compared with a BP algorithm that adds an impulse term to the weight update rule (I-BP for short). The two BP algorithms, S-BP and I-BP, are applied for fine-tuning on the basis of pre-training, forming two different optimization strategies: "hybrid training + I-BP fine-tuning" and "hybrid training + S-BP fine-tuning".
The prediction results of the "hybrid training + S-BP fine-tuning" strategy fluctuate the most, probably because the S-BP optimization of the multilayer network parameters causes the loss function to converge to a local optimum. "Hybrid training + I-BP fine-tuning" is relatively stable, probably because the impulse term added by the I-BP algorithm increases the search step to a certain extent, allowing the optimization to cross some narrow local minima and reach lower points.
The models are evaluated and compared using the Mean Absolute Percentage Error (MAPE), which has good stability and can serve as a common reference among evaluation standards. It is calculated as:

MAPE = (100% / N) Σ |y_l(k) - ŷ_l(k)| / y_l(k)

where N is the number of measured load samples, and y_l(k) and ŷ_l(k) denote the measured load and the predicted load at hour l of day k, respectively.
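MAPE as defined above can be computed as:

```python
def mape(measured, predicted):
    """Mean absolute percentage error, in percent."""
    n = len(measured)
    return 100.0 / n * sum(abs((y - y_hat) / y)
                           for y, y_hat in zip(measured, predicted))

# illustrative load values (MW), not data from the patent's experiments
error = mape([100.0, 200.0, 400.0], [90.0, 210.0, 400.0])  # 5.0 %
```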
Fifth, evaluation of Performance
(1) Influence of network structure on prediction model
The improved DBN model built for short-term power load prediction relies on the capacity of the GB-RBM to process real-valued data. FIG. 5 shows the results of comparative experiments between the improved DBN model and a DBN model whose first RBM is a BB-RBM (the B-DBN model). Both models are pre-trained and BP fine-tuned to optimize their parameters.
FIG. 5 is a graph of comparative test results of the deep belief network prediction model of the present invention (the improved DBN, denoted G-DBN) and a DBN model using two BB-RBMs (denoted B-DBN), comparing the predicted results of the two models. The instability of the B-DBN prediction is due to the noise that BB-RBMs tend to generate when processing real-valued data; by contrast, the prediction of the G-DBN is better.
(2) Comparison of different prediction methods
To further verify the feasibility of the embodiment, the loads of the four seasons of 2017 in a certain region are predicted separately; the training and testing sample sets consist of historical data from the 10 months preceding the day to be predicted. Common artificial intelligence prediction methods are selected for comparison: the BP neural network, the SVM method and the traditional DBN method (unsupervised pre-training with S-BP fine-tuning). To ensure objectivity, the experimental results are averages over 100 runs.
Referring to FIG. 6, the deep belief network prediction model of the present invention is denoted G-DBN and the conventional DBN model is denoted DBN. Comparing the predictions of the different methods, the prediction error of the proposed model, averaged over the four seasons, is a MAPE of 3.59%, lower than the other three methods. Taking into account the effects of temperature, light intensity and time-of-use electricity prices, the improved DBN more fully exploits the complex relationship between the multiple influencing factors and the power load.
With the increasing penetration of renewable energy generation (mainly photovoltaic and wind power), more than 20% of the annual power needs of some countries come from wind and solar energy, and in some regions the photovoltaic and wind output even exceeds 50% of the load in certain hours, making the fluctuation and uncertainty of power system operation more prominent. To verify the generalization performance of the method, the loads of three different regions whose renewable generation output ratios are about 30%, about 20% and about 10% are used as input samples for a comparative test, with MAPE as the evaluation index.
Referring to fig. 7, G-DBN denotes the deep belief network prediction model of the present invention and DBN the conventional DBN model. Fig. 7 shows that as the renewable output ratio increases, the prediction errors of all methods grow; BP and SVM change markedly, while the conventional DBN model and the proposed model change only slightly. This is probably because, as the renewable output ratio increases, the power system operates less stably and the nonlinear load curve becomes more complex, so the advantage of a deep network in fitting complex nonlinear curves becomes more evident.
Power load prediction is an important component of power system planning and a basis of economical power system operation, providing an important basis for distribution network management decisions and operating modes. The power load prediction method based on the deep belief network improves how existing neural network algorithms learn from historical data and raises learning efficiency. Simulation results show that, compared with traditional neural network algorithms, the prediction accuracy of the proposed method is improved.
The invention also provides a preferred embodiment of the power load prediction system based on the deep belief network, comprising a data preprocessing unit and a deep belief network prediction model unit. The data preprocessing unit comprises a normalization preprocessing subunit and a sparse self-coding neural network subunit: the normalization preprocessing subunit normalizes the historical data of the power load, the processed data are input to the sparse self-coding neural network subunit, and the sparse self-coding neural network subunit aggregates the input historical data. The deep belief network prediction model unit comprises, in order from input to output: a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer, which are pre-trained in an unsupervised training mode and then fine-tuned by a BP algorithm with an added momentum term; the unit takes the historical data processed by the data preprocessing unit as input and outputs the predicted value of the power load.
The data preprocessing unit comprehensively considers load-influencing factors such as date, weather and demand-side management information, and divides each data type in detail to form the input feature vector of the power load prediction model; the feature vectors are then input to a multi-layer neural network for two-layer sparse self-coding to perform feature fusion.
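The normalization preprocessing performed before feature fusion is not specified in the text; a common min-max scaling sketch (an assumption, not the patent's stated formula) is:

```python
def min_max_normalize(series):
    """Scale a historical load series into [0, 1]."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

# illustrative hourly load values (MW)
scaled = min_max_normalize([320.0, 410.0, 500.0])
```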
The deep belief network prediction model unit pre-trains the model with unsupervised training and fine-tunes the parameters with the BP algorithm; it is trained on the load data to produce output data, i.e., the predicted value of the power system load.
The deep belief network prediction model unit comprises a GB-RBM, a BB-RBM and a linear regression output layer. The GB-RBM consists of a hidden layer and a visible layer; the units between the two layers are connected with each other, but no links exist between units within the same layer. The BB-RBM has the same two-layer structure of a hidden layer and a visible layer, with connections between layers and none within a layer.
The energy function of the GB-RBM is

E(v,h) = Σ_i (v_i - a_i)^2 / (2σ_i^2) - b^T h - Σ_{i,j} (v_i / σ_i) W_ij h_j

where W, a and b are the parameters of the RBM; the two groups of units v and h have biases a and b respectively, the interaction between them is described by W, and σ is the standard deviation of the Gaussian noise of v.
The energy function of the BB-RBM is

E(v,h) = -a^T v - b^T h - v^T W h
The deep belief network prediction model unit is specifically used to train the model by an unsupervised training method to optimize its parameters; during optimization a temporary output layer is stacked on the GB-RBM and BB-RBM to ensure the integrity of the prediction model. The I-BP algorithm is then used for global fine-tuning to determine the topological structure of the deep belief network prediction model unit.
The output data of the deep belief network prediction model unit constitute the predicted value of the power system load.
The invention uses a self-encoder to aggregate comprehensive historical data and a multilayer restricted Boltzmann machine structure to form the deep belief network prediction model unit, improving learning performance and prediction precision through unsupervised training of the model. It improves how existing neural network algorithms learn, raising learning efficiency while analyzing historical data. Simulation results show that, compared with a traditional neural network prediction system, the power load prediction accuracy of the system based on the deep belief network is improved.
The above-mentioned embodiments only illustrate the technical ideas and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the contents of the invention. The invention is not limited to these embodiments: equivalent changes or modifications made within the spirit of the present invention shall fall within its scope.
Claims (10)
1. A power load prediction method based on a deep belief network, characterized in that: a sparse self-coding neural network is adopted to aggregate historical data of the power load; a composite optimized deep belief network prediction model is constructed based on restricted Boltzmann machines; the deep belief network prediction model comprises, in order from input to output: a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer, wherein the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine are pre-trained by an unsupervised training method and then fine-tuned by a BP algorithm with an impulse term; and the aggregated historical data are input into the deep belief network prediction model for prediction.
2. The deep belief network-based power load prediction method of claim 1, wherein historical data of the power load is collected by dividing each data type to form an input feature vector, taking into account factors of date, weather and demand side management information.
3. The deep belief network-based power load prediction method of claim 1, wherein a normalization pre-processing is performed on historical data of the power load before the data are aggregated by using a sparse self-coding neural network.
4. The deep belief network-based power load prediction method of claim 1, wherein the sparse self-encoding neural network employs a two-layer or three-layer neural network; the two layers of neural networks comprise an input layer and an output layer, and the three layers of neural networks comprise an input layer, a hidden layer and an output layer.
5. The deep belief network-based power load prediction method of claim 1, wherein the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine are each constructed with two layers, a visible layer and a hidden layer; the units between the two layers are connected with each other, and no connection exists between any two units in the same layer.
6. A power load prediction system based on a deep belief network, characterized by comprising a data preprocessing unit and a deep belief network prediction model unit, wherein: the data preprocessing unit comprises a sparse self-coding neural network subunit, which takes historical data of the power load as input and aggregates them; the deep belief network prediction model unit comprises, in order from input to output: a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer, which are pre-trained in an unsupervised training mode and then fine-tuned by a BP algorithm with an added momentum term; the unit takes the historical data processed by the data preprocessing unit as input and outputs a predicted value of the power load.
7. the deep belief network-based power load prediction system of claim 6, wherein the data pre-processing unit further comprises a normalization pre-processing subunit; and the normalization preprocessing subunit performs normalization preprocessing on the historical data of the power load, and the processed data is output to the sparse self-coding neural network subunit.
8. the deep belief network-based power load prediction system of claim 6, wherein the sparse self-encoding neural network subunit comprises a two-layer or three-layer neural network; the two layers of neural networks comprise an input layer and an output layer, and the three layers of neural networks comprise an input layer, a hidden layer and an output layer.
9. The deep belief network-based power load prediction system of claim 6, wherein the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine each comprise two layers, a hidden layer and a visible layer, wherein the units between the two layers are connected to each other, and no connection exists between any two units on the same layer.
10. The deep belief network-based power load prediction system of claim 9, wherein, when the deep belief network prediction model unit is trained, a temporary output layer is stacked on each of the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine; each is then pre-trained in an unsupervised training mode, after which the parameters are further fine-tuned by a back propagation (BP) algorithm with an impulse term.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910722953.1A CN110580543A (en) | 2019-08-06 | 2019-08-06 | Power load prediction method and system based on deep belief network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110580543A true CN110580543A (en) | 2019-12-17 |
Family
ID=68810919
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110580543A (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028512A (en) * | 2019-12-31 | 2020-04-17 | 福建工程学院 | Real-time traffic prediction method and device based on sparse BP neural network |
CN111144643A (en) * | 2019-12-24 | 2020-05-12 | 天津相和电气科技有限公司 | Day-ahead power load prediction method and device based on double-end automatic coding |
CN111366889A (en) * | 2020-04-29 | 2020-07-03 | 云南电网有限责任公司电力科学研究院 | Abnormal electricity utilization detection method for intelligent electric meter |
CN111598225A (en) * | 2020-05-15 | 2020-08-28 | 西安建筑科技大学 | Air conditioner cold load prediction method based on adaptive deep confidence network |
CN112016799A (en) * | 2020-07-15 | 2020-12-01 | 北京淇瑀信息科技有限公司 | Resource quota allocation method and device and electronic equipment |
CN112036598A (en) * | 2020-06-24 | 2020-12-04 | 国网天津市电力公司电力科学研究院 | Charging pile use information prediction method based on multi-information coupling |
CN112232547A (en) * | 2020-09-09 | 2021-01-15 | 国网浙江省电力有限公司营销服务中心 | Special transformer user short-term load prediction method based on deep belief neural network |
CN112381297A (en) * | 2020-11-16 | 2021-02-19 | 国家电网公司华中分部 | Method for predicting medium-term and long-term electricity consumption in region based on social information calculation |
CN112418504A (en) * | 2020-11-17 | 2021-02-26 | 西安热工研究院有限公司 | Wind speed prediction method based on mixed variable selection optimization deep belief network |
CN112418526A (en) * | 2020-11-24 | 2021-02-26 | 国网天津市电力公司 | Comprehensive energy load control method and device based on improved deep belief network |
CN112580853A (en) * | 2020-11-20 | 2021-03-30 | 国网浙江省电力有限公司台州供电公司 | Bus short-term load prediction method based on radial basis function neural network |
CN112578896A (en) * | 2020-12-18 | 2021-03-30 | Oppo(重庆)智能科技有限公司 | Frequency adjusting method, frequency adjusting device, electronic apparatus, and storage medium |
CN112650894A (en) * | 2020-12-30 | 2021-04-13 | 国网甘肃省电力公司营销服务中心 | Multidimensional analysis and diagnosis method for user electricity consumption behaviors based on combination of analytic hierarchy process and deep belief network |
CN113297791A (en) * | 2021-05-18 | 2021-08-24 | 四川大川云能科技有限公司 | Wind power combined prediction method based on improved DBN |
CN113378464A (en) * | 2021-06-09 | 2021-09-10 | 国网天津市电力公司营销服务中心 | Method and device for predicting service life of electric energy meter field tester |
CN113822475A (en) * | 2021-09-15 | 2021-12-21 | 浙江浙能技术研究院有限公司 | Thermal load prediction and control method for auxiliary machine fault load reduction working condition of steam extraction heat supply unit |
CN113837486A (en) * | 2021-10-11 | 2021-12-24 | 云南电网有限责任公司 | RNN-RBM-based distribution network feeder long-term load prediction method |
CN114913380A (en) * | 2022-06-15 | 2022-08-16 | 齐鲁工业大学 | Feature extraction method and system based on multi-core collaborative learning and deep belief network |
CN115578122A (en) * | 2022-10-17 | 2023-01-06 | 国网山东省电力公司淄博供电公司 | Load electricity price prediction method based on sparse self-coding nonlinear autoregressive network |
CN115936060A (en) * | 2022-12-28 | 2023-04-07 | 四川物通科技有限公司 | Transformer substation capacitance temperature early warning method based on depth certainty strategy gradient |
CN117094361A (en) * | 2023-10-19 | 2023-11-21 | 北京中科汇联科技股份有限公司 | Method for selecting parameter efficient fine adjustment module |
CN117937464A (en) * | 2024-01-09 | 2024-04-26 | 广东电网有限责任公司广州供电局 | Short-term power load prediction method based on PSR-DBN (Power System support-direct-base network) combined model |
CN118569453A (en) * | 2024-08-01 | 2024-08-30 | 四川仕虹腾飞信息技术有限公司 | Method and system for predicting flyer in financial sales process of banking outlets |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140156575A1 (en) * | 2012-11-30 | 2014-06-05 | Nuance Communications, Inc. | Method and Apparatus of Processing Data Using Deep Belief Networks Employing Low-Rank Matrix Factorization |
CN107330357A (en) * | 2017-05-18 | 2017-11-07 | 东北大学 | Vision SLAM closed loop detection methods based on deep neural network |
CN107730039A (en) * | 2017-10-10 | 2018-02-23 | 中国南方电网有限责任公司电网技术研究中心 | Method and system for predicting load of power distribution network |
CN108664690A (en) * | 2018-03-24 | 2018-10-16 | 北京工业大学 | Long-life electron device reliability lifetime estimation method under more stress based on depth belief network |
CN110009160A (en) * | 2019-04-11 | 2019-07-12 | 东北大学 | A kind of power price prediction technique based on improved deepness belief network |
Non-Patent Citations (6)
Title |
---|
XIAOYU ZHANG: "Short-Term Load Forecasting Based on a Improved Deep Belief Network", 2016 International Conference on Smart Grid and Clean Energy Technologies *
XIAOYU ZHANG: "Short-Term Load Forecasting Using a Novel Deep Learning Framework", Energies *
KONG XIANGYU: "Improved Deep Belief Network for Short-Term Load Forecasting Considering Demand-Side Management", IEEE Transactions on Power Systems *
KONG XIANGYU: "Short-term load forecasting method based on deep belief network", Automation of Electric Power Systems *
SUN HAIRONG: "Short-term heating network load forecasting based on deep learning", Computer Simulation *
YANG ZHIYU: "Substation load forecasting based on adaptive deep belief network", Proceedings of the CSEE *
CN115936060B (en) * | 2022-12-28 | 2024-03-26 | 四川物通科技有限公司 | Substation capacitance temperature early warning method based on depth deterministic strategy gradient |
CN117094361A (en) * | 2023-10-19 | 2023-11-21 | 北京中科汇联科技股份有限公司 | Method for selecting parameter efficient fine adjustment module |
CN117094361B (en) * | 2023-10-19 | 2024-01-26 | 北京中科汇联科技股份有限公司 | Method for selecting parameter efficient fine adjustment module |
CN117937464A (en) * | 2024-01-09 | 2024-04-26 | 广东电网有限责任公司广州供电局 | Short-term power load prediction method based on PSR-DBN combined model |
CN118569453A (en) * | 2024-08-01 | 2024-08-30 | 四川仕虹腾飞信息技术有限公司 | Method and system for predicting flyer in financial sales process of banking outlets |
CN118569453B (en) * | 2024-08-01 | 2024-10-08 | 四川仕虹腾飞信息技术有限公司 | Method and system for predicting flyer in financial sales process of banking outlets |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110580543A (en) | Power load prediction method and system based on deep belief network | |
Ke et al. | Short-term electrical load forecasting method based on stacked auto-encoding and GRU neural network | |
Shamshirband et al. | A survey of deep learning techniques: application in wind and solar energy resources | |
Tian | Short-term wind speed prediction based on LMD and improved FA optimized combined kernel function LSSVM | |
Tang et al. | Short‐term power load forecasting based on multi‐layer bidirectional recurrent neural network | |
Raza et al. | An ensemble framework for day-ahead forecast of PV output power in smart grids | |
Duan et al. | A combined short-term wind speed forecasting model based on CNN–RNN and linear regression optimization considering error | |
Yin et al. | Deep forest regression for short-term load forecasting of power systems | |
CN105354646B (en) | Power load forecasting method for hybrid particle swarm optimization and extreme learning machine | |
CN112418526A (en) | Comprehensive energy load control method and device based on improved deep belief network | |
CN109785618B (en) | Short-term traffic flow prediction method based on combinational logic | |
CN106709820A (en) | Power system load prediction method and device based on deep belief network | |
Bendali et al. | Deep learning using genetic algorithm optimization for short term solar irradiance forecasting | |
CN115688579A (en) | Basin multi-point water level prediction early warning method based on generation of countermeasure network | |
CN113554466A (en) | Short-term power consumption prediction model construction method, prediction method and device | |
CN115115125B (en) | Photovoltaic power interval probability prediction method based on deep learning fusion model | |
CN109583588B (en) | Short-term wind speed prediction method and system | |
CN115392387B (en) | Low-voltage distributed photovoltaic power generation output prediction method | |
CN116384572A (en) | Sequence-to-sequence power load prediction method based on multidimensional gating circulating unit | |
CN115481788A (en) | Load prediction method and system for phase change energy storage system | |
CN118134284A (en) | Deep learning wind power prediction method based on multi-stage attention mechanism | |
Xu et al. | A novel hybrid wind speed interval prediction model based on mode decomposition and gated recursive neural network | |
Lin et al. | A Novel Multi-Model Stacking Ensemble Learning Method for Metro Traction Energy Prediction | |
CN116822722A (en) | Water level prediction method, system, device, electronic equipment and medium | |
CN115759343A (en) | E-LSTM-based user electric quantity prediction method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20191217 |
|
WD01 | Invention patent application deemed withdrawn after publication |