CN110580543A - A Power Load Forecasting Method and System Based on Deep Belief Network - Google Patents

A Power Load Forecasting Method and System Based on Deep Belief Network

Info

Publication number
CN110580543A
CN110580543A
Authority
CN
China
Prior art keywords
layer
bernoulli
deep belief
belief network
boltzmann machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910722953.1A
Other languages
Chinese (zh)
Inventor
孔祥玉
胡天宇
李闯
郭家良
屈璐瑶
田龙飞
邓泽强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910722953.1A
Publication of CN110580543A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 Energy or water supply


Abstract

The invention discloses a power load forecasting method based on a deep belief network. A sparse autoencoder neural network aggregates the historical power load data, and a composite-optimized deep belief network forecasting model is built from restricted Boltzmann machines. From input to output the model comprises a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine, and a linear regression output layer; it is first pre-trained with an unsupervised training method and then fine-tuned with a BP algorithm that includes a momentum term. The aggregated historical data are input into the deep belief network forecasting model to produce the forecast. The invention also discloses a power load forecasting system based on a deep belief network. The invention better exploits the regularity of historical load data, improving forecasting efficiency, while fully considering the influence of different factors, improving forecasting accuracy.

Description

A Power Load Forecasting Method and System Based on Deep Belief Network

Technical Field

The present invention relates to a power load forecasting method and system, and in particular to a power load forecasting method and system based on a deep belief network.

Background

At present, short-term load forecasting is arguably the most important task in the management and dispatch of a power system. It provides the data support for generation planning, so that a generation schedule can be determined that best satisfies economic, security, environmental, and equipment constraints, ensuring the economical and secure operation of the power system. As the power system continues to develop, higher requirements are placed on short-term load forecasting for the grid. Although relatively mature traditional methods exist, they generally suffer from limited forecasting accuracy; their results have some reference value but still fall well short of the level expected by power utilities.

With the continuing progress of modern scientific research, a new generation of power load forecasting techniques has emerged, such as neural network theory, fuzzy mathematics, and support vector machines, and forecasting accuracy has improved further as a result. In the prior art, however, a single intelligent forecasting method struggles with the challenges that multi-dimensional data pose to forecasting accuracy and efficiency, whereas a forecasting method based on a deep belief network, which combines multiple data processing methods and intelligent algorithms, can achieve better accuracy and efficiency.

Summary of the Invention

To solve the technical problems in the prior art, the present invention provides a power load forecasting method and system based on a deep belief network.

The technical solution adopted by the present invention is a power load forecasting method based on a deep belief network: a sparse autoencoder neural network aggregates the historical power load data; a composite-optimized deep belief network forecasting model is built from restricted Boltzmann machines; the deep belief network forecasting model comprises, in order from input to output, a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine, and a linear regression output layer; the model is first pre-trained with an unsupervised training method and then fine-tuned with a BP algorithm that includes a momentum term; the aggregated historical data are then input into the deep belief network forecasting model for prediction.

Further, when the historical power load data are collected, date, weather, and demand-side management information are considered together, and each data type is subdivided to form the input feature vector.

Further, before the data are aggregated by the sparse autoencoder neural network, the historical power load data are normalized.

Further, the sparse autoencoder neural network uses a two-layer or three-layer neural network; the two-layer network comprises an input layer and an output layer, and the three-layer network comprises an input layer, a hidden layer, and an output layer.

Further, the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine are each built from two layers, a visible layer and a hidden layer; units in different layers are connected to each other, while there are no connections between units within the same layer.

The present invention also provides a power load forecasting system based on a deep belief network, comprising a data preprocessing unit and a deep belief network forecasting model unit. The data preprocessing unit includes a sparse autoencoder neural network subunit, which takes the historical power load data as input and aggregates them. The deep belief network forecasting model unit comprises, in order from input to output, a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine, and a linear regression output layer; it is first pre-trained in an unsupervised training mode and then fine-tuned with a BP algorithm that includes a momentum term; its input is the historical data processed by the data preprocessing unit and its output is the forecast value of the power load.

Further, the data preprocessing unit also includes a normalization preprocessing subunit, which normalizes the historical power load data and outputs the processed data to the sparse autoencoder neural network subunit.

Further, the sparse autoencoder neural network subunit includes a two-layer or three-layer neural network; the two-layer network comprises an input layer and an output layer, and the three-layer network comprises an input layer, a hidden layer, and an output layer.

Further, the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine each comprise two layers, a hidden layer and a visible layer; units in different layers are connected to each other, while there are no connections between units within the same layer.

Further, when the deep belief network forecasting model unit is trained, a temporary output layer is stacked on top of each of the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine; each is then pre-trained in an unsupervised training mode, and the parameters are fine-tuned with the momentum-augmented BP algorithm.

The advantages and positive effects of the present invention are as follows. The invention fully exploits the regularity of historical load data: the data feature vectors are fed into a sparse autoencoder neural network for feature fusion, a DBN model performs the load forecast, the model is pre-trained by unsupervised training, and the final forecast is obtained after fine-tuning with the BP algorithm. The invention can better exploit the regularity of historical load data, improving forecasting efficiency, while fully considering the influence of different factors, improving forecasting accuracy.

The present invention proposes a power load forecasting method and system based on a deep belief network, which improves the way existing neural network algorithms learn from historical data while raising learning efficiency. Simulation results show that, compared with traditional neural network algorithms, the forecasting accuracy of the invention is improved.

Compared with several existing forecasting methods and systems, the average forecasting error of the invention over the four seasons is 3.59% MAPE, lower than that of the other three methods. By taking into account the influence of temperature, solar irradiance, and time-of-use electricity prices, the power load forecasting method and system based on a deep belief network can make fuller use of the complex relationship between the various influencing factors and the power load.

Brief Description of the Drawings

Fig. 1 is a flowchart of a power load forecasting method based on a deep belief network according to the present invention;

Fig. 2 is a schematic diagram of the sparse autoencoder neural network structure used in the present invention;

Fig. 3 is a schematic diagram of the Gibbs sampling method used in the present invention;

Fig. 4 is a flowchart of the deep belief network forecasting model of the present invention;

Fig. 5 is a graph comparing the test results of the deep belief network forecasting model of the present invention with a DBN model that uses two BB-RBMs;

Fig. 6 is a graph comparing the power load forecasting method based on a deep belief network of the present invention with the BP neural network, SVM, and traditional DBN forecasting methods;

Fig. 7 is a bar chart comparing the forecasting results of the power load forecasting method based on a deep belief network of the present invention with the BP neural network, SVM, and traditional DBN forecasting methods under renewable energy grid connection.

Detailed Description

To further explain the content, features, and effects of the present invention, the following embodiments are described in detail with reference to the accompanying drawings:

The English abbreviations used in this application have the following meanings:

DBN: deep belief network (forecasting model);

RBM: restricted Boltzmann machine;

GB-RBM: Gaussian-Bernoulli restricted Boltzmann machine;

BB-RBM: Bernoulli-Bernoulli restricted Boltzmann machine;

AE: autoencoder neural network;

SAE: sparse autoencoder neural network;

SVM: support vector machine;

BP: backpropagation algorithm;

S-BP: standard error backpropagation algorithm based on gradient descent;

I-BP: error backpropagation algorithm with a momentum term added to the weight update rule;

Gibbs: Gibbs sampling;

MAPE: mean absolute percentage error.

Referring to Figs. 1 to 7, in a power load forecasting method based on a deep belief network, a sparse autoencoder neural network aggregates the historical power load data, and a composite-optimized deep belief network forecasting model is built from restricted Boltzmann machines. From input to output the model comprises a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine, and a linear regression output layer: the output of the Gaussian-Bernoulli RBM is the input of the Bernoulli-Bernoulli RBM, the output of the Bernoulli-Bernoulli RBM is the input of the linear regression output layer, and the input data therefore pass through the Gaussian-Bernoulli RBM and the Bernoulli-Bernoulli RBM in turn before being output by the linear regression layer. The model is first pre-trained with an unsupervised training method and then fine-tuned with a BP algorithm that includes a momentum term; that is, the deep belief network forecasting model is first pre-trained without supervision and its parameters are then fine-tuned with the momentum-augmented BP algorithm. Finally, the aggregated historical data are input into the deep belief network forecasting model to obtain the forecast value of the power system load.

During unsupervised pre-training, Gibbs sampling is applied to the training data and the contrastive divergence (CD-k) algorithm is used to accelerate the training of the deep belief network forecasting model. The Gaussian-Bernoulli RBM and the Bernoulli-Bernoulli RBM can each be pre-trained separately with the unsupervised method; in each case, Gibbs sampling of the training data and the CD-k algorithm accelerate the training of that RBM.

When the historical power load data are collected, factors such as date, weather, and demand-side management information can be considered together. Weather factors may include temperature, precipitation, wind speed, and solar radiation; day-type factors may include holidays and working days; demand-side management factors may include electricity consumption data and time-of-use prices. Each data type is subdivided in detail to form the input feature vector.
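As an illustration only, the sketch below shows one way the day-type, weather, and demand-side inputs described above could be packed into a single input feature vector; the feature names, the one-hot hour encoding, and the vector layout are assumptions and are not specified by the patent.

```python
import numpy as np

def build_feature_vector(temp_c, precip_mm, wind_ms, solar_wm2,
                         is_holiday, hour, tou_price, prev_day_loads):
    """Concatenate weather, day-type, price, and historical-load features
    into one input vector (illustrative encoding only)."""
    weather = [temp_c, precip_mm, wind_ms, solar_wm2]
    day_type = [1.0 if is_holiday else 0.0]
    hour_onehot = np.eye(24)[hour]              # hour of day as one-hot
    demand_side = [tou_price]                   # time-of-use price signal
    return np.concatenate([weather, day_type, hour_onehot,
                           demand_side, prev_day_loads])

x = build_feature_vector(21.5, 0.0, 3.2, 640.0, False, 14, 0.78,
                         prev_day_loads=np.random.rand(24))
print(x.shape)   # (54,) for this illustrative layout
```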

Before the data are aggregated by the sparse autoencoder neural network, the historical power load data can first be normalized.

The sparse autoencoder neural network can use a two-layer or three-layer neural network; the two-layer network comprises an input layer and an output layer, and the three-layer network comprises an input layer, a hidden layer, and an output layer.

The Gaussian-Bernoulli RBM and the Bernoulli-Bernoulli RBM can each be built from two layers, a visible layer and a hidden layer; units in different layers can be connected to each other, with no connections between units within the same layer.

The present invention also provides an embodiment of a power load forecasting system based on a deep belief network, comprising a data preprocessing unit and a deep belief network forecasting model unit. The data preprocessing unit includes a sparse autoencoder neural network subunit, which takes the historical power load data as input and aggregates them. The deep belief network forecasting model unit comprises, in order from input to output, a Gaussian-Bernoulli RBM, a Bernoulli-Bernoulli RBM, and a linear regression output layer: the output of the Gaussian-Bernoulli RBM is the input of the Bernoulli-Bernoulli RBM, the output of the Bernoulli-Bernoulli RBM is the input of the linear regression output layer, and the input data pass through the two RBMs in turn before being output by the linear regression layer. The unit is first pre-trained in an unsupervised training mode and then fine-tuned with a BP algorithm that includes a momentum term; its input is the historical data processed by the data preprocessing unit and its output is the forecast value of the power load. That is, the deep belief network forecasting model is first pre-trained without supervision, its parameters are fine-tuned with the momentum-augmented BP algorithm, and the aggregated historical data are finally input into the model to obtain the forecast value of the power system load.

Further, the data preprocessing unit may also include a normalization preprocessing subunit, which normalizes the historical power load data and outputs the processed data to the sparse autoencoder neural network subunit.

Further, the sparse autoencoder neural network subunit may include a two-layer or three-layer neural network; the two-layer network comprises an input layer and an output layer, and the three-layer network comprises an input layer, a hidden layer, and an output layer.

Further, the Gaussian-Bernoulli RBM and the Bernoulli-Bernoulli RBM may each comprise two layers, a hidden layer and a visible layer; units in different layers may be connected to each other, with no connections between units within the same layer.

Further, when the deep belief network forecasting model unit is trained, a temporary output layer may be stacked on top of each of the Gaussian-Bernoulli RBM and the Bernoulli-Bernoulli RBM; each may then be pre-trained in an unsupervised training mode, followed by parameter fine-tuning with the momentum-augmented BP algorithm.

The working principle of the present invention is described below with reference to a preferred embodiment:

1. Selecting the load data and aggregating the data with the SAE

To make the forecast more accurate, various factors that affect the load are considered, including date attributes, weather data, and demand-side management information such as peak-valley time-of-use electricity prices; each data type is subdivided in detail to form the input feature vector of the power load forecasting model. The feature vectors are then fed into the autoencoder for feature aggregation, so that the nonlinear data are aggregated effectively.

S1. Selecting the load data

The data include four main groups of measured variables: weather data (temperature, precipitation, wind speed, and solar radiation), day-type data, electricity consumption data, and time-of-use electricity prices.

① Data selection

Non-holidays: the historical data of several non-holiday days before the day to be forecast can be selected as the training sample set. Holidays: the commonly used grey relational projection method (Algorithm 1) can be used to select data from days similar to the day to be forecast.

Compute the degree of grey relational association between Y0j and Yij:

where λ is the distinguishing coefficient.

Compute the weight of each influencing factor:

② Data preprocessing

The goal of data normalization is to convert dimensioned data into dimensionless data and map the data into the range 0 to 1, which improves the convergence speed of the forecasting model:

Xi* = (Xi − Xmin) / (Xmax − Xmin), where Xi is a sample value, Xi* is its normalized value, and Xmax and Xmin are the maximum and minimum of Xi, respectively.
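A minimal sketch of the min-max normalization above, assuming each feature is scaled with its own sample minimum and maximum:

```python
import numpy as np

def min_max_normalize(x):
    """Map dimensioned samples X_i to dimensionless values in [0, 1]."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

load = np.array([820.0, 965.0, 1110.0, 740.0])
print(min_max_normalize(load))   # all values fall in [0, 1]
```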

S2. Aggregating the data with the SAE

An autoencoder neural network (AE) is a three-layer unsupervised neural network that maps the input vector nonlinearly into a higher-level representation in the next layer. The AE tries to approximate the identity function, making the target output close to the input and thereby minimizing the expected reconstruction error. Fig. 2 shows the basic architecture of the AE: the first layer is the input layer, the middle layer is the hidden layer, and the last layer is the output layer. The AE performs a nonlinear transformation from one layer to the next through an activation function.

The learning process of the AE network consists of two stages: an encoder stage and a decoder stage.

The encoder transforms the input into a more abstract feature vector, and the decoder reconstructs the input from that feature vector. The encoder is the forward propagation step: the training samples {x(1), x(2), ..., x(i)}, x(i) ∈ Rn, are mapped nonlinearly to the hidden layer through the sigmoid function f(z) = 1/(1 + exp(−z)).

The decoding step reconstructs the input at the output layer. The reconstructed vectors {y(1), y(2), ..., y(i)} are given by:

where W is the weight matrix between the layers and b is the bias. {W, b} are the trainable parameters of the encoder and decoder.

In the network, a hidden node is considered "inactive" if its output is zero or close to zero, and "active" when its output is 1 or close to 1. To make the hidden layer sparse, most of its nodes must be inactive. A sparsity constraint is therefore imposed on the hidden nodes:

where ρ̂j is the average activation of hidden node j, m is the number of training samples, and aj(x(i)) denotes the activation of hidden node j for the i-th sample.

For the training set, to avoid learning identical features and to improve the ability to capture important information, the average activation of each hidden node j should be set to zero or close to zero. An additional penalty factor is therefore introduced:

where ρ is the sparsity parameter and KL(·) is the Kullback-Leibler divergence, which penalizes deviation of the actual activation distribution from the desired one. If ρ̂j equals ρ the penalty is zero, but as ρ̂j approaches 0 or 1 the KL divergence approaches infinity. The SAE cost function is then:

where J(W, b) is the loss function, which aims to make the output of the decoder as close as possible to the input. The parameters (W, b) can be updated with a gradient descent algorithm.
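The following sketch, written under the assumption of a three-layer autoencoder with a shared sigmoid activation, evaluates the reconstruction error plus the KL sparsity penalty described above; the weight-decay term and the backpropagated gradients are omitted, and the function and variable names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sae_cost(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction error plus KL sparsity penalty for a three-layer
    sparse autoencoder (gradients and weight decay omitted)."""
    m = X.shape[0]
    H = sigmoid(X @ W1 + b1)            # encoder: hidden activations
    Y = sigmoid(H @ W2 + b2)            # decoder: reconstruction of X
    rho_hat = H.mean(axis=0)            # average activation of each hidden node
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return 0.5 * np.sum((Y - X) ** 2) / m + beta * kl

rng = np.random.default_rng(0)
X = rng.random((32, 10))                            # 32 samples, 10 features
W1 = 0.1 * rng.standard_normal((10, 5)); b1 = np.zeros(5)
W2 = 0.1 * rng.standard_normal((5, 10)); b2 = np.zeros(10)
print(sae_cost(X, W1, b1, W2, b2))
```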

2. Building the deep belief network model. The deep belief network model of the present invention is a composite-optimized, improved deep belief network model, hereinafter referred to as the improved DBN.

Brief introduction to the RBM: a restricted Boltzmann machine (RBM) is a stochastic neural network (its neuron nodes behave randomly and take random values when activated). It contains one visible layer and one hidden layer. Neurons within the same layer are independent of one another, while neurons in different layers are connected to each other (bidirectional connections). During training and use, information flows in both directions, and the weights are the same in both directions, but the biases differ (there is one bias per neuron). The neurons of the upper layer form the hidden layer, whose values are collected in the vector h; the neurons of the lower layer form the visible layer, whose values are collected in the vector v. The connection weights are represented by the matrix W. Unlike a DNN, an RBM does not distinguish a forward from a backward direction: the state of the visible layer can act on the hidden layer, and the state of the hidden layer can act on the visible layer. The bias vector of the hidden layer is b and the bias vector of the visible layer is a.

RBM model parameters: mainly the weight matrix W, the bias vectors a and b, the hidden-layer state vector h, and the visible-layer state vector v.

In this method, the DBN is composed of two kinds of RBM: the BB-RBM and the GB-RBM.

S3. Building the BB-RBM model

In a Bernoulli-Bernoulli RBM (BB-RBM), both the visible units and the hidden units are binary stochastic units. Its energy function is:

E(v, h) = −aᵀv − bᵀh − vᵀWh (Eq. 9)

where the weight matrix W, the bias vectors a and b, the hidden-layer state vector h, and the visible-layer state vector v are the parameters of the RBM. The model is divided into two groups of units, v and h, whose biases are a and b, respectively, and whose interaction is described by W.

The joint probability distribution of the model is defined from the energy function as:

P(v, h) = e^(−E(v, h)) / Z

where Z = Σv,h e^(−E(v, h)) is the partition function.

"Restricted" means that there are no connections between nodes of the same kind in the RBM, which implies that the hidden units (and likewise the visible units) are conditionally independent. In the BB-RBM all units are binary stochastic units, which means the input data should be binary, or real values between 0 and 1 interpreted as the probability that a visible unit is active.

The conditional probability distribution of each unit is given by the sigmoid of the input it receives:

p(hj = 1 | v) = σ(bj + Σi wij vi), p(vi = 1 | h) = σ(ai + Σj wij hj)

where σ(x) = 1/(1 + exp(−x)) is the sigmoid activation function.
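A small sketch of the BB-RBM conditionals above, assuming W is stored as a visible-by-hidden matrix; the sampling helper and the shapes are illustrative, not the patented implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bb_rbm_conditionals(v, W, a, b):
    """p(h_j = 1 | v), a Bernoulli sample of h, and p(v_i = 1 | h)
    for a Bernoulli-Bernoulli RBM (W is visible x hidden)."""
    p_h = sigmoid(v @ W + b)                       # hidden given visible
    h = (np.random.rand(*p_h.shape) < p_h) * 1.0   # Bernoulli sample of h
    p_v = sigmoid(h @ W.T + a)                     # visible given hidden
    return p_h, h, p_v

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((6, 4))
a, b = np.zeros(6), np.zeros(4)
v = (rng.random(6) < 0.5) * 1.0                    # a binary visible vector
print(bb_rbm_conditionals(v, W, a, b))
```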

S4. Building the GB-RBM model

Unlike the BB-RBM, the energy function of the GB-RBM is defined as:

E(v, h) = Σi (vi − ai)² / (2σ²) − Σj bj hj − Σi,j (vi / σ) wij hj

where σ is the standard deviation of the Gaussian noise of v.

The conditional probability distribution of the model has the same form as that of the BB-RBM, and the joint probability distribution is again P(v, h) = e^(−E(v, h)) / Z,

where vi takes real values and follows a Gaussian distribution with mean μ and standard deviation σ.
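The sketch below illustrates one Gauss-Bernoulli sampling step consistent with the description above, under the simplifying assumption of a single shared noise level σ for all visible units; it is not the patented implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gb_rbm_step(v, W, a, b, sigma=1.0, rng=None):
    """One GB-RBM sampling step: binary hidden units from real-valued
    visible units, then a Gaussian reconstruction of v (shared sigma)."""
    rng = rng or np.random.default_rng(2)
    p_h = sigmoid((v / sigma) @ W + b)             # p(h_j = 1 | v)
    h = (rng.random(p_h.shape) < p_h) * 1.0
    v_mean = a + sigma * (h @ W.T)                 # mean of p(v_i | h)
    v_sample = rng.normal(v_mean, sigma)           # Gaussian visible units
    return h, v_sample

W = 0.1 * np.random.default_rng(3).standard_normal((5, 3))
h, v_rec = gb_rbm_step(np.array([0.2, -1.1, 0.7, 0.0, 1.5]),
                       W, np.zeros(5), np.zeros(3))
print(h, v_rec)
```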

S5. Gibbs sampling

Based on the above model, the RBM must be trained, i.e. its parameters θ must be adjusted, to fit the given training samples. Maximum likelihood estimation is a commonly used method for estimating model parameters; here it amounts to finding the parameters θ that maximize the probability of all training data x under the model distribution, so the RBM training problem can be converted into finding the maximum of a likelihood function.

Given a training set, the log-likelihood of the model over the training samples can be written as:

ln L(θ | D) = Σx∈D ln P(x | θ)

where θ = {b, a, W} and D is the training data set.

Its gradient with respect to the weights can be expressed as:

∇W ln P(x) = EP*[x·hᵀ] − EP[x·hᵀ]

where EP* is the expectation under the empirical distribution P* and EP is the expectation under the model distribution P.

Although EP*[x·h] can easily be computed from the training data, EP[x·h] involves all possible values of v and h: it requires enumerating every combination of visible and hidden unit values, and the number of combinations grows exponentially, so it is difficult to compute directly. Unbiased samples for estimating this expectation can usually be obtained by Gibbs sampling (see Fig. 3).

S6. The CD-k algorithm

To speed up RBM training, the contrastive divergence (CD-k) algorithm is used for unsupervised learning. In the CD-k algorithm (where k is the number of sampling steps), a good fit is already obtained with k = 1, i.e. a single Gibbs sampling step, so the CD-1 form of the algorithm is generally used to fit the parameter values.

W ← W + ε [p(h = 1 | v) vᵀ − p(h* = 1 | v*) v*ᵀ]

b ← b + ε (v − v*)

c ← c + ε [p(h = 1 | v) − p(h* = 1 | v*)] (Eq. 17)

where v* is the reconstruction of the visible layer v and h* is the hidden layer obtained from the reconstructed visible layer v*. With learning rate ε, training the RBM with the contrastive divergence algorithm yields the weight matrix W, the visible-layer bias vector b, and the hidden-layer bias vector c.
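A minimal CD-1 update following Eq. 17, assuming W is stored as a visible-by-hidden matrix (so the weight update is the transpose of the layout written above); the learning rate and sampling details are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v, W, b, c, eps=0.01, rng=None):
    """One CD-1 parameter update for a Bernoulli RBM following Eq. 17
    (b: visible bias, c: hidden bias, W: visible x hidden)."""
    rng = rng or np.random.default_rng(4)
    p_h = sigmoid(v @ W + c)                      # p(h = 1 | v)
    h = (rng.random(p_h.shape) < p_h) * 1.0
    p_v_rec = sigmoid(h @ W.T + b)                # reconstruction v*
    v_rec = (rng.random(p_v_rec.shape) < p_v_rec) * 1.0
    p_h_rec = sigmoid(v_rec @ W + c)              # p(h* = 1 | v*)
    W += eps * (np.outer(v, p_h) - np.outer(v_rec, p_h_rec))
    b += eps * (v - v_rec)
    c += eps * (p_h - p_h_rec)
    return W, b, c

rng = np.random.default_rng(5)
W = 0.1 * rng.standard_normal((8, 4))
b, c = np.zeros(8), np.zeros(4)
v = (rng.random(8) < 0.5) * 1.0
W, b, c = cd1_update(v, W, b, c)
```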

S7. Building the improved DBN model

When the DBN is applied to load forecasting under complex influencing factors, the key issues are how to construct a suitable forecasting model and how to train it effectively. In this embodiment, the DBN model shown in Fig. 4 is improved so as to be suited to short-term power load forecasting.

The improved DBN model can consist of one GB-RBM, several BB-RBMs, and a linear regression output layer: (1) the GB-RBM is used as the first RBM of the stacked DBN so that continuous real-valued input data, such as weather data and load data, are effectively converted into binary representations; (2) because the BB-RBM is suited to modelling binary data (such as black-and-white images or encoded text), all the other RBMs are BB-RBMs, which extract features from the input data; (3) the hidden layer of the last RBM and the output layer form a linear regression network structure, which takes the feature vector extracted by the improved DBN as input and, through a linear activation function, produces the power load time series y at an interval of 15 minutes, 30 minutes, or 1 hour.
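As a structural sketch only, the class below stacks sigmoid hidden layers (standing in for the GB-RBM and BB-RBM layers) under a linear regression output; the layer sizes, class name, and initialization are assumptions, and the RBM pre-training step is not shown.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ImprovedDBN:
    """Sketch of the forecasting network: a GB-RBM input layer, one or
    more BB-RBM layers, and a linear regression output layer. The sizes
    and parameters here are placeholders, not the patented values."""
    def __init__(self, sizes, rng=None):
        rng = rng or np.random.default_rng(6)
        self.W = [0.1 * rng.standard_normal((m, n))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x):
        h = x
        for W, b in zip(self.W[:-1], self.b[:-1]):
            h = sigmoid(h @ W + b)          # GB-RBM / BB-RBM hidden layers
        return h @ self.W[-1] + self.b[-1]  # linear regression output

dbn = ImprovedDBN([54, 64, 32, 24])         # e.g. a 24-point load curve output
print(dbn.forward(np.random.default_rng(7).random(54)).shape)
```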

3. Pre-training the model with unsupervised training and fine-tuning the parameters with the BP algorithm

Unsupervised pre-training of DBNs has effectively solved classification tasks such as speech recognition, and it also provides a better parameter starting point for the subsequent fine-tuning. During the hybrid pre-training, a temporary output layer is stacked on top of the BB-RBM or GB-RBM being trained to keep the forecasting model complete.

S8. Unsupervised pre-training

Unsupervised learning is applied to the deep belief network. The sparse autoencoder model is used as the data preprocessing tool in deep learning, and the sparse autoencoder parameters are trained; in the sparse autoencoder model, reconstructed data are sought that are close to the original data a, i.e.:

Extract M training samples and compute the reconstruction error function C:

where y is the coefficient vector and w is the weight coefficient of the training samples. By optimizing the formula:

the coefficients y and the sample weights w are obtained,

where the additional term is the sparsity penalty factor.

S9. Fine-tuning the parameters with the BP algorithm

After the layer-by-layer hybrid pre-training has provided good network parameters for the improved DBN model, the BP algorithm is used for global parameter fine-tuning. The BP neural network is a multi-layer feed-forward bionic algorithm with good forward propagation of information and backward propagation of errors; the training cycle is repeated until the desired error is reached, finally yielding a model that meets expectations. However, because the BP neural network suffers from slow learning and limited accuracy, a measure to improve the convergence speed of the backpropagation algorithm is adopted, namely an added momentum term:

Δωij(n+1) = η δj oi + α Δωij(n) (0 < α < 1) (Eq. 21)

where η is the learning rate, α is the momentum coefficient, oi is the output of node i, and δj is the error term of node j.
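A small sketch of the momentum-augmented weight update of Eq. 21, with −η·grad standing in for the η·δj·oi term; the learning rate and momentum coefficient are illustrative values.

```python
import numpy as np

def momentum_update(w, grad, prev_delta, eta=0.1, alpha=0.8):
    """Weight update with an added momentum term (I-BP):
    delta_w(n+1) = -eta * grad + alpha * delta_w(n), 0 < alpha < 1."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

w = np.zeros(3)
prev = np.zeros(3)
for grad in (np.array([0.4, -0.2, 0.1]), np.array([0.3, -0.1, 0.05])):
    w, prev = momentum_update(w, grad, prev)
print(w)
```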

4. Experimental setup

The sample data for the load test experiments are based on the actual load data of a region in China from January 2016 to December 2017. The data include four main groups of measured variables: weather data (temperature, precipitation, wind speed, and solar radiation), day-type data, electricity consumption data, and time-of-use price data (peak hours 7:00-11:00 and 19:00-23:00; flat hours 11:00-19:00; valley hours 23:00-7:00 the next day). The weather data come from a meteorological website and are collected at a 1-hour interval; the hourly load data corresponding to the weather data are used for the analysis.

To compare algorithm performance, the two most commonly used parameter fine-tuning methods for DBNs were selected: the standard gradient-descent BP algorithm (S-BP) and the BP algorithm with a momentum term added to the weight update rule (I-BP). On top of the pre-training, the S-BP and I-BP algorithms were used for fine-tuning, giving two optimization strategies: "hybrid training + I-BP fine-tuning" and "hybrid training + S-BP fine-tuning".

The prediction results of the "hybrid training + S-BP fine-tuning" strategy fluctuate the most, probably because the S-BP algorithm tends to make the loss function converge to a local optimum when optimizing the parameters of a multi-layer network. "Hybrid training + I-BP fine-tuning" is more stable, probably because the momentum term in the I-BP algorithm effectively enlarges the search step, allowing the optimization to pass over some narrow local minima and reach lower ones.

The models are evaluated and compared using the mean absolute percentage error (MAPE); because of its stability, MAPE can serve as a common benchmark across evaluation criteria. It is computed as:

MAPE = (1/N) Σ |yl(k) − ŷl(k)| / yl(k) × 100%

where N is the number of load samples, and yl(k) and ŷl(k) denote the measured load and the forecast load at hour l of day k, respectively.
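A minimal MAPE computation matching the definition above; the sample values are illustrative.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error over all forecast samples."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / y_true)

measured = np.array([980.0, 1040.0, 1110.0, 1015.0])
forecast = np.array([955.0, 1070.0, 1090.0, 1050.0])
print(f"MAPE = {mape(measured, forecast):.2f}%")
```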

5. Performance evaluation

(1) Influence of the network structure on the forecasting model

The improved DBN model built in this work for short-term power load forecasting relies on the ability of the GB-RBM to handle real-valued data. Fig. 5 shows the comparative test results of the improved DBN model and a DBN whose first RBM is a BB-RBM (B-DBN). Both models are pre-trained and fine-tuned with the BP algorithm for parameter optimization.

Fig. 5 compares the deep belief network forecasting model of the present invention with a DBN model that uses two BB-RBMs; the model of the present invention is an improved DBN, denoted G-DBN in the figure, while B-DBN denotes the DBN model with two BB-RBMs. The instability of the B-DBN forecast is due to the fact that the BB-RBM tends to introduce noise when handling real-valued data; by contrast, the G-DBN forecast is better.

(2) Comparison of different forecasting methods

To further verify the feasibility of this embodiment, the loads of a region in the four seasons of 2017 were forecast separately; the training and test sample sets consist of the historical data of the 10 months preceding the day to be forecast. Commonly used artificial intelligence forecasting methods were selected for comparison: the BP neural network, the SVM method, and the traditional DBN method (unsupervised pre-training with S-BP fine-tuning). To ensure objectivity, the reported results are averages over 100 runs of each experiment.

Referring to Fig. 6, where G-DBN denotes the deep belief network forecasting model of the present invention and DBN denotes the traditional DBN model, the forecasts of the different methods are compared: the forecasting error of the model of the present invention averages 3.59% MAPE over the four seasons, lower than the other three methods. By taking temperature, solar irradiance, and time-of-use prices into account, the improved DBN can make fuller use of the complex relationship between the various influencing factors and the power load.

As the penetration of renewable generation (mainly photovoltaic and wind power) keeps rising, more than 20% of the annual electricity demand in some countries already comes from wind and solar energy, and in some regions the photovoltaic and wind output even exceeds 50% of the load during certain hours, making the volatility and uncertainty of power system operation more pronounced. To verify the generalization performance of the method, the loads of three regions in which renewable generation accounts for roughly 30%, 20%, and 10% of output were used as input samples for a comparative experiment, with MAPE as the evaluation metric.

Referring to Fig. 7, where G-DBN denotes the deep belief network forecasting model of the present invention and DBN denotes the traditional DBN model: as the share of renewable output increases, the forecasting errors of all methods grow; BP and SVM change markedly, whereas the traditional DBN model and the model of the present invention change only slightly. This is probably because, as the share of renewable output rises, power system operation becomes less stable and the nonlinear load curve becomes more complex, so the advantage of deep networks in fitting complex nonlinear curves becomes more apparent.

Power load forecasting is an important part of power system planning and the basis of the economical operation of the power system, providing an important basis for distribution network management decisions and operating modes. The power load forecasting method based on a deep belief network proposed by the present invention improves the way existing neural network algorithms learn from historical data while raising learning efficiency. Simulation results show that, compared with traditional neural network algorithms, the forecasting accuracy of the deep-belief-network-based method is improved.

The present invention also provides a preferred embodiment of a power load forecasting system based on a deep belief network, comprising a data preprocessing unit and a deep belief network forecasting model unit. The data preprocessing unit includes a normalization preprocessing subunit and a sparse autoencoder neural network subunit: the normalization preprocessing subunit normalizes the historical power load data and passes the processed data to the sparse autoencoder neural network subunit, which aggregates the input historical power load data. The deep belief network forecasting model unit comprises, in order from input to output, a Gaussian-Bernoulli RBM, a Bernoulli-Bernoulli RBM, and a linear regression output layer; it is first pre-trained in an unsupervised training mode and then fine-tuned with the momentum-augmented BP algorithm; its input is the historical data processed by the data preprocessing unit and its output is the forecast value of the power load.

The data preprocessing unit considers the various factors that affect the load, such as date, weather, and demand-side management information, and subdivides each data type in detail to form the input feature vector of the power load forecasting model; the feature vectors are fed into two-layer sparse autoencoder neural networks for feature fusion.

The deep belief network forecasting model unit is pre-trained with unsupervised training and fine-tuned with the BP algorithm; trained on the load data, it produces the output data, namely the forecast value of the power system load.

The deep belief network forecasting model unit comprises a GB-RBM, a BB-RBM, and a linear regression output layer. The GB-RBM consists of a hidden layer and a visible layer; units in the two layers are connected to each other, but there are no connections between units within the same layer. The BB-RBM likewise consists of a hidden layer and a visible layer; units in the two layers are connected to each other, but there are no connections between units within the same layer.

The energy function of the GB-RBM is

E(v, h) = Σi (vi − ai)² / (2σ²) − Σj bj hj − Σi,j (vi / σ) wij hj

where w, a, and b are the parameters of the RBM; the biases of v and h are a and b, respectively, their interaction is described by w, and σ is the standard deviation of the Gaussian noise of v.

The energy function of the BB-RBM is

E(v, h) = −aᵀv − bᵀh − vᵀWh

深度信念网络预测模型单元,具体用于采用无监督训练方法对模型进行训练,为深度信念网络预测模型单元的参数寻优,在寻优过程中,在GB-RAM、BB-RAM上面堆叠一层临时输出层,以保证预测模型的完整性;采用I-BP算法对深度信念网络预测模型单元进行全局微调,确定深度信念网络预测模型单元的拓扑结构。The deep belief network prediction model unit is specifically used to train the model using the unsupervised training method to optimize the parameters of the deep belief network prediction model unit. During the optimization process, a layer is stacked on top of GB-RAM and BB-RAM Temporary output layer to ensure the integrity of the prediction model; the I-BP algorithm is used to globally fine-tune the prediction model units of the deep belief network to determine the topology of the prediction model units of the deep belief network.

深度信念网络预测模型单元输出数据,成为电力系统负荷的预测值。The output data of the deep belief network prediction model unit becomes the predicted value of the power system load.

本发明采用自编码器对综合的历史数据进行聚合,采用多层受限玻尔兹曼机构成深度信念网络预测模型单元,并通过无监督训练模型,从而改善学习性能,提高预测精度。改进现有神经网络算法学习对历史数据进行分析的同时还提高了学习效率。仿真结果显示,相比于传统神经网络预测系统,本发明的基于深度信念网络的电力负荷预测系统的电力负荷预测准确性有所提高。The invention uses an autoencoder to aggregate comprehensive historical data, uses a multi-layer restricted Boltzmann machine to form a deep belief network prediction model unit, and trains the model without supervision, thereby improving learning performance and prediction accuracy. While improving the existing neural network algorithm to learn and analyze historical data, it also improves the learning efficiency. The simulation results show that, compared with the traditional neural network forecasting system, the power load forecasting system based on the deep belief network of the present invention has improved power load forecasting accuracy.

The embodiments described above serve only to illustrate the technical ideas and features of the invention; their purpose is to enable those skilled in the art to understand the content of the invention and implement it accordingly, and they do not limit the patent scope of the invention. All equivalent changes or modifications made within the spirit disclosed by the invention still fall within the patent scope of the invention.

Claims (10)

1. A power load prediction method based on a deep belief network, characterized in that a sparse self-coding neural network is adopted to carry out aggregation processing on historical data of a power load; a composite optimized deep belief network prediction model is constructed based on restricted Boltzmann machines; the deep belief network prediction model comprises, in sequence from input to output, a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer, wherein the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine are pre-trained by adopting an unsupervised training method, and parameter fine tuning is then carried out by adopting a BP algorithm with an impulse term; and the aggregated historical data are input into the deep belief network prediction model for prediction.
2. The deep belief network-based power load prediction method of claim 1, wherein historical data of the power load is collected by dividing each data type to form an input feature vector, taking into account factors of date, weather and demand side management information.
3. The deep belief network-based power load prediction method of claim 1, wherein a normalization pre-processing is performed on historical data of the power load before the data are aggregated by using a sparse self-coding neural network.
4. The deep belief network-based power load prediction method of claim 1, wherein the sparse self-encoding neural network employs a two-layer or three-layer neural network; the two-layer neural network comprises an input layer and an output layer, and the three-layer neural network comprises an input layer, a hidden layer and an output layer.
5. The deep belief network-based power load prediction method of claim 1, wherein the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine are each constructed from two layers, namely a visible layer and a hidden layer, wherein the units between the two layers are connected with each other and no connection exists between any two units on the same layer.
6. A power load prediction system based on a deep belief network, characterized by comprising a data preprocessing unit and a deep belief network prediction model unit; wherein: the data preprocessing unit comprises a sparse self-coding neural network subunit, and the sparse self-coding neural network subunit takes historical data of the power load as input and performs aggregation processing; the deep belief network prediction model unit comprises, in sequence from input to output, a Gaussian-Bernoulli restricted Boltzmann machine, a Bernoulli-Bernoulli restricted Boltzmann machine and a linear regression output layer, which are pre-trained by adopting an unsupervised training mode and then fine-tuned by adopting a BP algorithm with a momentum term; historical data processed by the data preprocessing unit are input, and a predicted value of the power load is output.
7. The deep belief network-based power load prediction system of claim 6, wherein the data preprocessing unit further comprises a normalization preprocessing subunit; the normalization preprocessing subunit performs normalization preprocessing on the historical data of the power load, and the processed data are output to the sparse self-coding neural network subunit.
8. The deep belief network-based power load prediction system of claim 6, wherein the sparse self-coding neural network subunit comprises a two-layer or three-layer neural network; the two-layer neural network comprises an input layer and an output layer, and the three-layer neural network comprises an input layer, a hidden layer and an output layer.
9. The deep belief network-based power load prediction system of claim 6, wherein the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine each comprise two layers, namely a hidden layer and a visible layer, wherein the units between the two layers are connected to each other and no connection exists between any two units on the same layer.
10. The deep belief network-based power load prediction system of claim 9, wherein, when the deep belief network prediction model unit is trained, the Gaussian-Bernoulli restricted Boltzmann machine and the Bernoulli-Bernoulli restricted Boltzmann machine are each stacked with a temporary output layer, are each pre-trained in an unsupervised training mode, and are then further fine-tuned by using a back-propagation (BP) algorithm with an impulse term.
CN201910722953.1A 2019-08-06 2019-08-06 A Power Load Forecasting Method and System Based on Deep Belief Network Pending CN110580543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910722953.1A CN110580543A (en) 2019-08-06 2019-08-06 A Power Load Forecasting Method and System Based on Deep Belief Network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910722953.1A CN110580543A (en) 2019-08-06 2019-08-06 A Power Load Forecasting Method and System Based on Deep Belief Network

Publications (1)

Publication Number Publication Date
CN110580543A true CN110580543A (en) 2019-12-17

Family

ID=68810919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910722953.1A Pending CN110580543A (en) 2019-08-06 2019-08-06 A Power Load Forecasting Method and System Based on Deep Belief Network

Country Status (1)

Country Link
CN (1) CN110580543A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140156575A1 (en) * 2012-11-30 2014-06-05 Nuance Communications, Inc. Method and Apparatus of Processing Data Using Deep Belief Networks Employing Low-Rank Matrix Factorization
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN107730039A (en) * 2017-10-10 2018-02-23 中国南方电网有限责任公司电网技术研究中心 Method and system for predicting load of power distribution network
CN108664690A (en) * 2018-03-24 2018-10-16 北京工业大学 Long-life electron device reliability lifetime estimation method under more stress based on depth belief network
CN110009160A (en) * 2019-04-11 2019-07-12 东北大学 An Electricity Price Prediction Method Based on Improved Deep Belief Network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
XIAOYU ZHANG: "Short-Term Load Forecasting Based on a Improved Deep Belief Network", 《2016 INTERNATIONAL CONFERENCE ON SMART GRID AND CLEAN ENERGY TECHNOLOGIES》 *
XIAOYU ZHANG: "Short-Term Load Forecasting Using a Novel Deep Learning Framework", 《ENERGIES》 *
孔祥玉: "Improved Deep Belief Network for Short-Term Load Forecasting Considering Demand-Side Management", 《IEEE TRANSACTIONS ON POWER SYSTEMS》 *
孔祥玉: "基于深度信念网络的短期负荷预测方法", 《电力系统自动化》 *
孙海蓉: "基于深度学习的短时热网负荷预测", 《计算机仿真》 *
杨智宇: "基于自适应深度信念网络的变电站负荷预测", 《中国电机工程学报》 *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144643A (en) * 2019-12-24 2020-05-12 天津相和电气科技有限公司 Day-ahead power load forecasting method and device based on double-terminal automatic coding
CN111028512A (en) * 2019-12-31 2020-04-17 福建工程学院 A real-time traffic prediction method and device based on sparse BP neural network
CN111366889A (en) * 2020-04-29 2020-07-03 云南电网有限责任公司电力科学研究院 A method for detecting abnormal electricity consumption of a smart meter
CN111366889B (en) * 2020-04-29 2022-01-25 云南电网有限责任公司电力科学研究院 Abnormal electricity utilization detection method for intelligent electric meter
CN111598225B (en) * 2020-05-15 2023-05-02 西安建筑科技大学 A method for air conditioning cooling load forecasting based on adaptive deep belief network
CN111598225A (en) * 2020-05-15 2020-08-28 西安建筑科技大学 Air conditioner cold load prediction method based on adaptive deep confidence network
CN112036598A (en) * 2020-06-24 2020-12-04 国网天津市电力公司电力科学研究院 A charging pile usage information prediction method based on multi-information coupling
CN112036598B (en) * 2020-06-24 2025-02-11 国网天津市电力公司电力科学研究院 A charging pile usage information prediction method based on multi-information coupling
CN112016799A (en) * 2020-07-15 2020-12-01 北京淇瑀信息科技有限公司 Resource quota allocation method and device and electronic equipment
CN112232547A (en) * 2020-09-09 2021-01-15 国网浙江省电力有限公司营销服务中心 Special transformer user short-term load prediction method based on deep belief neural network
CN112232547B (en) * 2020-09-09 2023-12-12 国网浙江省电力有限公司营销服务中心 Special transformer user short-term load prediction method based on deep confidence neural network
CN112381297A (en) * 2020-11-16 2021-02-19 国家电网公司华中分部 Method for predicting medium-term and long-term electricity consumption in region based on social information calculation
CN112418504A (en) * 2020-11-17 2021-02-26 西安热工研究院有限公司 Wind speed prediction method based on mixed variable selection optimization deep belief network
CN112418504B (en) * 2020-11-17 2023-02-28 西安热工研究院有限公司 Wind speed prediction method based on mixed variable selection optimization deep belief network
CN112580853A (en) * 2020-11-20 2021-03-30 国网浙江省电力有限公司台州供电公司 Bus short-term load prediction method based on radial basis function neural network
CN112418526A (en) * 2020-11-24 2021-02-26 国网天津市电力公司 Comprehensive energy load control method and device based on improved deep belief network
CN112578896A (en) * 2020-12-18 2021-03-30 Oppo(重庆)智能科技有限公司 Frequency adjusting method, frequency adjusting device, electronic apparatus, and storage medium
CN112650894A (en) * 2020-12-30 2021-04-13 国网甘肃省电力公司营销服务中心 Multidimensional analysis and diagnosis method for user electricity consumption behaviors based on combination of analytic hierarchy process and deep belief network
CN113297791A (en) * 2021-05-18 2021-08-24 四川大川云能科技有限公司 Wind power combined prediction method based on improved DBN
CN113297791B (en) * 2021-05-18 2024-02-06 四川大川云能科技有限公司 Wind power combination prediction method based on improved DBN
CN113378464A (en) * 2021-06-09 2021-09-10 国网天津市电力公司营销服务中心 Method and device for predicting service life of electric energy meter field tester
CN113822475A (en) * 2021-09-15 2021-12-21 浙江浙能技术研究院有限公司 Thermal load prediction and control method for auxiliary machine fault load reduction working condition of steam extraction heat supply unit
CN113822475B (en) * 2021-09-15 2023-11-21 浙江浙能技术研究院有限公司 Thermal load prediction and control method for auxiliary machine fault load-reduction working condition of steam extraction heat supply unit
CN113837486A (en) * 2021-10-11 2021-12-24 云南电网有限责任公司 RNN-RBM-based distribution network feeder long-term load prediction method
CN113837486B (en) * 2021-10-11 2023-08-22 云南电网有限责任公司 RNN-RBM-based distribution network feeder long-term load prediction method
CN114913380A (en) * 2022-06-15 2022-08-16 齐鲁工业大学 Feature extraction method and system based on multi-core collaborative learning and deep belief network
CN115578122A (en) * 2022-10-17 2023-01-06 国网山东省电力公司淄博供电公司 Load electricity price prediction method based on sparse self-coding nonlinear autoregressive network
CN115936060A (en) * 2022-12-28 2023-04-07 四川物通科技有限公司 Transformer substation capacitance temperature early warning method based on depth certainty strategy gradient
CN115936060B (en) * 2022-12-28 2024-03-26 四川物通科技有限公司 Substation capacitance temperature early warning method based on depth deterministic strategy gradient
CN117094361A (en) * 2023-10-19 2023-11-21 北京中科汇联科技股份有限公司 Method for selecting parameter efficient fine adjustment module
CN117094361B (en) * 2023-10-19 2024-01-26 北京中科汇联科技股份有限公司 Method for selecting parameter efficient fine adjustment module
CN117937464A (en) * 2024-01-09 2024-04-26 广东电网有限责任公司广州供电局 Short-term power load prediction method based on PSR-DBN (Power System support-direct-base network) combined model
CN118569453A (en) * 2024-08-01 2024-08-30 四川仕虹腾飞信息技术有限公司 Method and system for predicting flyer in financial sales process of banking outlets
CN118569453B (en) * 2024-08-01 2024-10-08 四川仕虹腾飞信息技术有限公司 Method and system for predicting flyer in financial sales process of banking outlets

Similar Documents

Publication Publication Date Title
CN110580543A (en) A Power Load Forecasting Method and System Based on Deep Belief Network
Ke et al. Short-term electrical load forecasting method based on stacked auto-encoding and GRU neural network
Shamshirband et al. A survey of deep learning techniques: application in wind and solar energy resources
Tang et al. Short‐term power load forecasting based on multi‐layer bidirectional recurrent neural network
Zhou et al. Prediction of photovoltaic power output based on similar day analysis, genetic algorithm and extreme learning machine
Yin et al. Deep forest regression for short-term load forecasting of power systems
Liu et al. Ultra-short-term wind power forecasting based on deep Bayesian model with uncertainty
CN107292453A (en) A kind of short-term wind power prediction method based on integrated empirical mode decomposition Yu depth belief network
CN108038580A (en) The multi-model integrated Forecasting Methodology of photovoltaic power based on synchronous extruding wavelet transformation
Khodayar et al. Robust deep neural network for wind speed prediction
CN108197751A (en) Seq2seq network Short-Term Load Forecasting Methods based on multilayer Bi-GRU
CN105184678A (en) Method for constructing photovoltaic power station generation capacity short-term prediction model based on multiple neural network combinational algorithms
CN110717610B (en) A wind power power prediction method based on data mining
CN106503867A (en) A kind of genetic algorithm least square wind power forecasting method
CN112418526A (en) Comprehensive energy load control method and device based on improved deep belief network
CN106022549A (en) Short term load predication method based on neural network and thinking evolutionary search
CN114169445A (en) Day-ahead photovoltaic power prediction method, device and system based on CAE and GAN hybrid network
CN115860177A (en) Photovoltaic power generation power prediction method based on combined machine learning model and application thereof
CN115481788B (en) Phase change energy storage system load prediction method and system
Wang et al. Prediction method of wind farm power generation capacity based on feature clustering and correlation analysis
CN116384572A (en) Sequence-to-sequence electric load forecasting method based on multidimensional gated recurrent unit
Wattal et al. Deep learning based forecasting of consumption of petroleum products in india
Lin et al. A novel multi-model stacking ensemble learning method for metro traction energy prediction
CN112270440A (en) A Load Prediction Method of Distribution Network Based on Capsule Neural Network
Hu et al. Incremental forecaster using C–C algorithm to phase space reconstruction and broad learning network for short-term wind speed prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191217