CN110909941A - Power load prediction method, device and system based on LSTM neural network

Info

Publication number
CN110909941A
Authority
CN
China
Prior art keywords
load
neural network
load data
power
lstm neural
Prior art date
Legal status
Granted
Application number
CN201911172170.7A
Other languages
Chinese (zh)
Other versions
CN110909941B (en)
Inventor
栾乐
汤毅
葛馨远
刘田
陈国炎
周凡珂
陈海涛
彭和平
Current Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau Co Ltd filed Critical Guangzhou Power Supply Bureau Co Ltd
Priority to CN201911172170.7A
Publication of CN110909941A
Application granted
Publication of CN110909941B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/045: Combinations of networks
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06: Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • General Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Tourism & Hospitality (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Primary Health Care (AREA)
  • Supply And Distribution Of Alternating Current (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a power load prediction method, device and system based on an LSTM neural network. The method comprises the following steps: acquiring local hyper-parameters and load data of each power device within a preset time period; classifying the load data of the power devices into a plurality of load data sets based on the identity information of the corresponding power device load data; starting a preset number of threads, each of which reads its corresponding load data set; and processing, in each thread, the corresponding load data set and the local hyper-parameters based on a pre-established LSTM neural network model to obtain power load prediction data. By using local hyper-parameters together with a pre-built LSTM neural network model and performing the load prediction in multiple threads, the number of training runs is reduced, the time consumed by load prediction is shortened, and the efficiency of power load prediction is improved.

Description

Power load prediction method, device and system based on LSTM neural network
Technical Field
The present application relates to the field of load prediction technologies, and in particular, to a method, an apparatus, and a system for predicting a power load based on an LSTM neural network.
Background
With the development of power systems, load prediction has become one of the important tasks of the power sector. Accurate load prediction makes it possible to schedule the start-up and shutdown of generating units in the grid economically and reasonably, maintain the safety and stability of grid operation, arrange unit maintenance plans reasonably, guarantee normal production and daily life, effectively reduce power generation costs, and improve economic and social benefits.
Currently available load prediction technologies include traditional load prediction technologies and modern intelligent load prediction technologies. Traditional load prediction technologies place low requirements on historical data, which made them widely used early on, but their prediction accuracy is low. Modern intelligent load prediction technologies achieve higher prediction accuracy, but their strong dependence on data quality limits their application; moreover, because the operating characteristics of different loads differ, a large amount of computing resources is occupied, and when the number of loads to be predicted grows on a large scale, the time required for network training and output increases sharply, making the efficiency of training and output difficult to guarantee in large-scale applications.
In the course of implementation, the inventors found that the conventional technology has at least the following problem: conventional power load prediction is inefficient.
Disclosure of Invention
Therefore, it is necessary to provide a power load prediction method, device and system based on the LSTM neural network to solve the problem of low efficiency of conventional power load prediction.
In order to achieve the above object, an embodiment of the present invention provides a power load prediction method based on an LSTM neural network, comprising the following steps:
acquiring local hyper-parameters and load data of each power device within a preset time period;
classifying the load data of each power device into a plurality of load data sets based on the identity information of the corresponding power device load data;
starting a preset number of threads, and reading the corresponding load data sets based on the threads respectively;
and processing, in each thread, the corresponding load data set and the local hyper-parameters based on a pre-established LSTM neural network model to obtain power load prediction data.
In one embodiment, the step of obtaining the local hyper-parameters comprises:
querying the database to which the power device belongs for the local hyper-parameters of the corresponding power device number file;
and reading the local hyper-parameters of the corresponding power device number file according to the query result.
In one embodiment, the step of reading the corresponding load data sets based on the threads comprises:
performing data preprocessing on each load data set to obtain preprocessed load data sets;
and reading, by each thread, the corresponding preprocessed load data set.
In one embodiment, the step of obtaining the power load prediction data by processing, in each thread, the corresponding load data set and the local hyper-parameters based on the pre-established LSTM neural network model comprises the following steps:
training the local hyper-parameters based on the LSTM neural network model to obtain post-training hyper-parameters;
and updating the local hyper-parameters to the post-training hyper-parameters.
In one embodiment, the method further comprises the following steps:
after a thread has processed a preset number of load data sets, closing the thread; and restarting the thread for the next round of processing.
In one embodiment, the step of obtaining the power load prediction data by processing, in each thread, the corresponding load data set and the local hyper-parameters based on the pre-established LSTM neural network model further comprises:
establishing a global array for the corresponding thread;
and caching the power load prediction data of the corresponding thread in the corresponding global array.
In one embodiment, the load data set includes at least one electrical device load data.
On the other hand, an embodiment of the present invention further provides a power load prediction apparatus for an LSTM neural network, including:
the data acquisition unit is used for acquiring local hyper-parameters and load data of each power device in a preset time period;
the data set dividing unit is used for classifying the load data of each power device into a plurality of load data sets based on the identity information of the load data of the corresponding power device;
the thread starting unit is used for starting a preset number of threads and reading the corresponding load data sets based on the threads respectively;
and the load prediction unit is used for processing, in each thread, the corresponding load data set and the local hyper-parameters based on a pre-established LSTM neural network model to obtain power load prediction data.
On the other hand, the embodiment of the present invention further provides a power load prediction system of an LSTM neural network, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the power load prediction methods of the LSTM neural network when executing the computer program.
In another aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the power load prediction method of the LSTM neural network described above.
One of the above technical solutions has the following advantages and beneficial effects:
in each embodiment of the power load prediction method of the LSTM neural network, local hyper-parameters and the load data of each power device within a preset time period are obtained; the load data of the power devices is classified into a plurality of load data sets based on the identity information of the corresponding power device load data; a preset number of threads is started, each of which reads its corresponding load data set; and, in each thread, the corresponding load data set and the local hyper-parameters are processed based on a pre-established LSTM neural network model to obtain power load prediction data, so that load prediction for large-scale power equipment can be realized. By using local hyper-parameters together with a pre-built LSTM neural network model and performing the load prediction in multiple threads, the number of training runs is reduced, the time consumed by load prediction is shortened, and the efficiency of power load prediction is improved.
Drawings
FIG. 1 is a schematic diagram of an application environment of a power load prediction method of the LSTM neural network in one embodiment;
FIG. 2 is a first flowchart of a method for predicting a power load of an LSTM neural network according to an embodiment;
FIG. 3 is a second flow diagram of a method for power load prediction for an LSTM neural network in accordance with an embodiment;
FIG. 4 is a third flowchart of a method for predicting a power load of the LSTM neural network according to an embodiment;
FIG. 5 is a fourth flowchart illustrating a method for predicting a power load of the LSTM neural network in accordance with an embodiment;
FIG. 6 is a schematic diagram of the time taken for prediction of the power load prediction method of the LSTM neural network in one embodiment;
FIG. 7 is a schematic diagram of an electrical load prediction apparatus of the LSTM neural network in one embodiment;
FIG. 8 is a block diagram of a system for predicting electrical load for an LSTM neural network in accordance with one embodiment.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present application are shown in the drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
At present, traditional load prediction technologies include the time series method, trend extrapolation and the like; their main strategy is to obtain a load prediction value from historical load data using statistical methods, for example by performing short-term prediction analysis of the power load with a time series ARIMA model or by short-term prediction with curve extrapolation. Traditional load prediction technologies place low requirements on historical data, which made them widely used early on, but their prediction accuracy is low. With the development of power systems, it has become possible to obtain the data required for various predictions accurately, and modern intelligent load prediction technologies with higher accuracy have begun to be applied.
Modern intelligent load prediction technologies include artificial neural network methods, support vector machines (SVM) and the like, among which artificial neural network methods are the most widely used, for example predicting the power load with an LSTM neural network or predicting the short-term load with a BP neural network. The strong dependence of modern intelligent load prediction technologies on data quality limits their application. More importantly, because the operating characteristics of different loads differ, the common practice is to build a separate network model for each group of loads, which occupies a large amount of computing resources. In addition, since load prediction is a time series prediction problem, new load data is generated every day and must be used for training, which further increases the demand for computational efficiency. When the number of loads to be predicted grows on a large scale, the time required for network training and output also increases sharply, and the efficiency of training and output is difficult to guarantee in large-scale applications.
In each embodiment of the power load prediction method of the LSTM neural network, load prediction is performed by using local hyper-parameters, combined with a pre-established LSTM neural network model, in multiple threads, so that the number of training runs is reduced, the time consumed by load prediction is shortened, and the efficiency of power load prediction is improved.
The LSTM neural network-based power load prediction method provided by the application can be applied to the application environment shown in FIG. 1. The terminal 102 communicates with the power device 104 through a network. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer or a portable wearable device, and the power device 104 may be an independent power device or a cluster formed by a plurality of power devices.
In one embodiment, as shown in fig. 2, there is provided a power load prediction method based on an LSTM neural network, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step S210, obtaining a local hyper-parameter and load data of each electrical device within a preset time period.
Here, local hyper-parameters are parameters whose values are set before the learning process starts, in the machine-learning sense; the local hyper-parameters are the neural network parameters used for training on the load data of the power device. The power device may be, but is not limited to, a distribution transformer. Power device load data refers to the load data of a power device, which may be the sum of the electric power drawn from the power device by the consumers' electric equipment at a given moment. The number of electrical loads connected to a power device may be one or more.
Specifically, local hyper-parameters corresponding to the power equipment are acquired, and then new data processing based on the original training results can be started according to the acquired local hyper-parameters.
Step S220, classifying the load data of the electrical equipment into a plurality of load data sets based on the identity information of the load data of the corresponding electrical equipment.
The identity information may be an ID number (unique code) corresponding to the power device load data. A load data set is a data set containing at least one power device's load data; that is, a load data set comprises at least one item of power device load data.
Specifically, based on the identity information of the corresponding power device load data, power device load data of the same type can be grouped into one load data set, so that the load data of all power devices is classified into a plurality of load data sets.
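As a minimal sketch of this grouping step (the record fields and the helper name `group_by_device_id` are illustrative assumptions, not taken from the patent), the load data can be keyed by the device's unique code with a plain dictionary:

```python
from collections import defaultdict

def group_by_device_id(records):
    """Classify raw load records into load data sets keyed by device identity (ID)."""
    load_data_sets = defaultdict(list)
    for record in records:
        load_data_sets[record["id"]].append(record["load"])
    return dict(load_data_sets)

# Example: three raw records from two distribution transformers.
records = [
    {"id": "TX001", "load": 120.5},
    {"id": "TX002", "load": 98.3},
    {"id": "TX001", "load": 118.7},
]
print(group_by_device_id(records))
# {'TX001': [120.5, 118.7], 'TX002': [98.3]}
```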
Step S230, a preset number of threads are started, and corresponding load data sets are respectively read based on the threads.
A thread is the smallest unit that the operating system can schedule for execution; it is contained in a process and is the actual operating unit within the process. It should be noted that the specific value of the preset number can be determined through practical experiments; different systems can determine the most suitable number of threads through testing.
Specifically, by starting a preset number of threads and having each thread read its corresponding load data set, multithreading raises the resource utilization of the system and improves the efficiency of load prediction.
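A minimal sketch of this step, assuming a fixed thread count chosen by testing; `predict_for_data_sets` stands for the per-thread prediction routine, one possible form of which is sketched a little further below:

```python
from concurrent.futures import ThreadPoolExecutor

NUM_THREADS = 33  # the most efficient thread count found by testing in the example system

def run_prediction(load_data_sets, predict_for_data_sets):
    """Distribute the load data sets over a preset number of threads.

    Each worker thread receives its own slice of the data sets and
    processes it independently of the other threads.
    """
    device_ids = list(load_data_sets)
    chunks = [device_ids[i::NUM_THREADS] for i in range(NUM_THREADS)]
    with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
        futures = [
            pool.submit(predict_for_data_sets,
                        {d: load_data_sets[d] for d in chunk})
            for chunk in chunks if chunk
        ]
        return [f.result() for f in futures]
```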
And step S240, based on the previously established LSTM neural network model, respectively processing the corresponding load data set and the local super-parameter through each thread to obtain power load prediction data.
The LSTM (Long Short-Term Memory) neural network model is a recurrent neural network model. The power load prediction data may be used to indicate the load condition of the power device.
Specifically, each thread can process the corresponding load data set and the local super-parameter respectively based on a pre-established LSTM neural network model, so as to obtain power load prediction data and realize power load prediction of the power equipment.
In one example, by constructing the neural network once at the start of each thread, all distribution transformers within that thread use the same network, which reduces the time spent constructing neural networks.
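The per-thread reuse described in this example could look roughly like the following sketch (Keras-style `set_weights`/`predict` calls are used only as an illustration; `build_lstm`, `prepare_window` and `local_params` are assumed helpers, with one possible `build_lstm` shown in the worked example near the end of the description):

```python
# Hypothetical per-thread worker: the network is built once per thread and then
# reused for every distribution transformer assigned to that thread.
def predict_for_data_sets(load_data_sets, build_lstm, prepare_window, local_params):
    model = build_lstm()                               # constructed a single time per thread
    predictions = {}
    for device_id, series in load_data_sets.items():
        model.set_weights(local_params[device_id])     # restore this device's stored parameters
        x = prepare_window(series)                     # shape the recent load points for the LSTM
        predictions[device_id] = model.predict(x)      # power load prediction data
    return predictions
```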
In the embodiments of the power load prediction method of the LSTM neural network, local hyper-parameters and the load data of each power device within a preset time period are obtained; the load data of the power devices is classified into a plurality of load data sets based on the identity information of the corresponding power device load data; a preset number of threads is started, each of which reads its corresponding load data set; and, in each thread, the corresponding load data set and the local hyper-parameters are processed based on a pre-established LSTM neural network model to obtain power load prediction data, so that load prediction for large-scale power equipment can be realized. By using local hyper-parameters together with a pre-built LSTM neural network model and performing the load prediction in multiple threads, the number of training runs is reduced, the time consumed by load prediction is shortened, and the efficiency of power load prediction is improved.
In one example, taking an actual system, the running time of a single thread increases only slightly when the number of threads is between 1 and 25, and increases faster once the number of threads exceeds 26. After database reads and writes are separated from the threads, the threads run concurrently without interfering with each other. Dividing the total running time by the number of threads gives the average running time per thread, which first decreases and then increases as the number of threads grows; for this system, 33 threads gives the highest efficiency. Different systems can determine the most suitable number of threads through testing.
In a specific embodiment, the step of obtaining the local hyper-parameter includes:
inquiring local hyper-parameters of the corresponding power equipment serial number file in the database to which the power equipment belongs;
and reading the local hyper-parameters corresponding to the power equipment number files according to the query result.
Here, the database can be used to store data such as the local hyper-parameters. The power device number file refers to the serial number of the power device. It should be noted that the local hyper-parameters may be stored in the database indexed by the power device number file, so that the hyper-parameters from earlier training data are preserved.
Specifically, the database to which the power device belongs is queried for the local hyper-parameters of the corresponding power device number file, and according to the query result the local hyper-parameters of that number file can be read, so that repeated training is avoided; at the same time, the subsequent training set is adjusted, which reduces the number of training runs in subsequent training, lowers the occupation of computing resources and improves the efficiency of power load prediction.
In one example, in practical short-term prediction applications the algorithm must deliver predictions every day over a period of time. Between any two consecutive days, the data sets differ only by one newly added day, and the share of this new data in the whole data set is typically small, so there is no need to restart training from zero every day. Apart from the first day after program deployment, from the second day onward training on the new data set starts from the original training result, which greatly increases subsequent computation speed without affecting the accuracy of the algorithm.
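A minimal sketch of this incremental strategy, assuming the stored parameters are kept as per-device weight files (the file layout, the `.h5` format and the helper names are illustrative, and the epoch values follow the worked example later in the description):

```python
import os

def daily_update(model, device_no, new_x, new_y, weight_dir="weights"):
    """Continue training from the stored result instead of restarting from zero.

    On the first day no stored file exists, so the network is trained from
    scratch with a large epoch count; on later days the saved parameters are
    loaded and only a few epochs over the newly added day are needed.
    """
    path = os.path.join(weight_dir, f"{device_no}.h5")
    if os.path.exists(path):
        model.load_weights(path)                       # resume from the original training result
        model.fit(new_x, new_y, epochs=4, verbose=0)   # short follow-up training
    else:
        model.fit(new_x, new_y, epochs=70, verbose=0)  # first training
    model.save_weights(path)                           # store the post-training parameters
    return model
```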
In a specific embodiment, the step of reading the corresponding load data sets on a thread basis comprises:
respectively carrying out data preprocessing on each load data set to obtain each preprocessed load data set;
and respectively reading the corresponding preprocessed load data sets on the basis of each thread.
The data preprocessing can be, but is not limited to, supplementing data, removing unique attributes, processing missing values, encoding attributes, normalizing and regularizing data, feature selection, principal component analysis, and the like.
Specifically, data preprocessing is performed on each load data set to obtain each preprocessed load data set, and then the corresponding preprocessed load data sets can be read based on each thread. By performing data preprocessing on the load data set, the accuracy of power load prediction is improved.
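A minimal preprocessing sketch covering two of the operations named above, missing-value filling and min-max normalization (NumPy is used for brevity; the exact preprocessing chain is not fixed by the patent):

```python
import numpy as np

def preprocess(series):
    """Fill missing values and scale one device's load series to [0, 1]."""
    arr = np.asarray(series, dtype=float)
    mask = np.isnan(arr)
    if mask.any():
        arr[mask] = arr[~mask].mean()      # replace missing samples with the observed mean
    lo, hi = arr.min(), arr.max()
    scaled = (arr - lo) / (hi - lo) if hi > lo else np.zeros_like(arr)
    return scaled, (lo, hi)                # keep the scale so predictions can be inverted later
```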
In one embodiment, as shown in fig. 3, there is provided a power load prediction method based on an LSTM neural network, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step S310, obtain the local super-parameter and the load data of each electrical device in the preset time period.
Step S320, classifying the load data of the electrical equipment into a plurality of load data sets based on the identity information of the load data of the corresponding electrical equipment.
Step S330, a preset number of threads are started, and corresponding load data sets are read based on the threads.
And step S340, based on the previously established LSTM neural network model, respectively processing the corresponding load data set and the local super-parameter through each thread to obtain power load prediction data.
And step S350, training the local hyper-parameter based on the LSTM neural network model to obtain the post-training hyper-parameter.
And step S360, updating the local hyper-parameter into the post-training hyper-parameter.
The specific content processes of the steps S310, S320, S330, and S340 may refer to the above contents, and are not described herein again.
Specifically, in order to satisfy both power load prediction accuracy and prediction efficiency, the prediction process and the training process are separated. Local hyper-parameters and the load data of each power device within a preset time period are obtained; the load data of the power devices is classified into a plurality of load data sets based on the identity information of the corresponding power device load data; a preset number of threads is started, each of which reads its corresponding load data set; and, in each thread, the corresponding load data set and the local hyper-parameters are processed based on a pre-established LSTM neural network model to obtain power load prediction data, so that load prediction for large-scale power equipment can be realized. The local hyper-parameters are then trained based on the LSTM neural network model to obtain post-training hyper-parameters, and the local hyper-parameters are updated to the post-training hyper-parameters, so that the local hyper-parameters are kept up to date. When earlier data is input, the existing local hyper-parameters are used to predict the power load of the power devices; after all devices have been predicted, the newly input data is used to train and update the local hyper-parameters. This reduces the number of training runs, shortens the time consumed by load prediction, and improves the efficiency of power load prediction.
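A sketch of this separation: prediction runs first with the existing local parameters, and only after every device has been predicted is the network trained on the newly input data and the stored parameters replaced (all helper names are illustrative assumptions):

```python
def predict_then_train(model, load_data_sets, local_params, prepare_window, new_samples):
    """Run the prediction phase first, then train on new data and update stored parameters."""
    predictions = {}
    # Phase 1: predict every device with the existing (previously trained) parameters.
    for device_id, series in load_data_sets.items():
        model.set_weights(local_params[device_id])
        predictions[device_id] = model.predict(prepare_window(series))
    # Phase 2: after all devices are predicted, train on the newly input data
    # and replace the stored parameters with the post-training parameters.
    for device_id, (x_new, y_new) in new_samples.items():
        model.set_weights(local_params[device_id])
        model.fit(x_new, y_new, epochs=4, verbose=0)
        local_params[device_id] = model.get_weights()
    return predictions
```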
In one embodiment, as shown in fig. 4, there is provided an LSTM neural network-based power load prediction method, which is described by taking the application of the method to the terminal in fig. 1 as an example, and includes the following steps:
step S410, acquiring local hyper-parameters and load data of each electrical device within a preset time period.
Step S420, classifying the load data of the electrical equipment into a plurality of load data sets based on the identity information of the load data of the corresponding electrical equipment.
Step S430, a preset number of threads are started, and corresponding load data sets are read based on the threads respectively.
And step S440, based on the previously established LSTM neural network model, respectively processing the corresponding load data set and the local super-parameter through each thread to obtain power load prediction data.
The specific content processes of step S410, step S420, step S430 and step S440 may refer to the above contents, and are not described herein again.
Step S450, after the thread processes a preset number of load data sets, closing the thread; and the thread is restarted for the next round of processing.
The preset number of load data sets can be configured in advance in the system.
Specifically, during the prediction and training processing, after a thread has processed the preset number of load data sets, it can be closed and a new thread started to continue processing the next batch of data. This eliminates the drop in prediction efficiency that would otherwise result from growing memory occupation as more device data is computed, so that load prediction for large-scale power equipment can be realized while reducing the number of training runs, shortening the time consumed by load prediction and improving the efficiency of power load prediction.
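A minimal sketch of the close-and-restart policy using Python's threading module; the value of `BATCH_LIMIT` and the worker body are illustrative assumptions:

```python
import threading

BATCH_LIMIT = 200  # preset number of load data sets one thread handles before it is closed

def run_in_rounds(load_data_sets, process_one):
    """Process the load data sets in rounds, one fresh thread per round.

    When a round of BATCH_LIMIT data sets finishes, its thread ends and the
    memory it held is released before the next round is started.
    """
    items = list(load_data_sets.items())
    for start in range(0, len(items), BATCH_LIMIT):
        batch = items[start:start + BATCH_LIMIT]
        worker = threading.Thread(
            target=lambda b=batch: [process_one(device_id, series) for device_id, series in b]
        )
        worker.start()
        worker.join()  # wait for this round, then open a new thread for the next one
```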
In one embodiment, the step of obtaining the power load prediction data by processing the corresponding load data set and the local hyper-parameter through each thread based on the previously established LSTM neural network model further includes:
establishing a global array of corresponding threads;
and caching the power load prediction data of the corresponding thread in a corresponding global array.
A global array has global scope and remains valid for the entire running period of the program.
Specifically, when local data is written into the global array, the system generates a thread lock that prevents other threads from rewriting that array. By establishing a global array for each thread and caching that thread's power load prediction data in its own array, the data can be stored into the overall global array in one pass once the data processing tasks of all threads are complete. Because the prediction and training parts of the threads run at slightly different paces, when one thread is rewriting its global array the other threads are still predicting or have already finished, which minimizes lock contention and improves the efficiency of power load prediction for the power equipment.
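A sketch of the per-thread result buffers: each thread appends only to its own list, and the merged result is written to the database once, after all threads have finished (`predict_one` and `write_to_database` are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

NUM_THREADS = 33
thread_buffers = [[] for _ in range(NUM_THREADS)]   # one "global array" per thread

def thread_task(index, data_sets_chunk, predict_one):
    buffer = thread_buffers[index]                   # only this thread appends to its own buffer
    for device_id, series in data_sets_chunk.items():
        buffer.append((device_id, predict_one(series)))

def run_and_collect(chunks, predict_one, write_to_database):
    with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
        for index, chunk in enumerate(chunks):
            pool.submit(thread_task, index, chunk, predict_one)
    # The pool has joined all threads here; merge the buffers and write once.
    all_results = [row for buffer in thread_buffers for row in buffer]
    write_to_database(all_results)
```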
In one example, when load prediction is performed using an LSTM neural network, the prediction and training time of the program is relatively long. In a large-scale load prediction scenario, the total time spent on prediction and training can exceed the number of days being predicted, so that by the time the prediction results are produced the actual load data is already available.
In an actual environment, the CPU occupancy of the system when running the load prediction program is low, around 3%. System resources are under-utilized for two main reasons: the system allocates only a small number of CPU cores to the program, and building, training and predicting with the neural network uses the CPU only part of the time. Multithreading is therefore used to raise system resource utilization and improve load prediction efficiency: the load prediction program is treated as one thread, and several threads are started simultaneously. A gradual increase in system CPU utilization is observed, but measurements show that the prediction time per distribution transformer load is not effectively improved. The following problems mainly exist:
(1) The database does not support simultaneous access by multiple threads; when one thread operates on the database, the other threads must stop running because of the thread lock. Since every thread needs to read and write the database, an increase in the running time of one thread also increases the database read/write time of the other threads.
(2) Because screening in the database takes a long time, when many threads are running the database reading time is far longer than the training and prediction time of the neural network.
(3) As the number of threads increases, the efficiency of a single thread decreases.
To solve these problems, indexes are first added on the record time and the distribution transformer name in the database to improve data reading efficiency. When memory permits, the data of all distribution transformers within the set time window is read into memory in a single pass before the load prediction program runs, so that the database does not need to be read repeatedly. Meanwhile, the data of each prediction is stored into a global array and written to the database once after prediction is finished. This avoids other threads stalling because a thread is reading or writing the database and improves prediction efficiency; at the same time, the read data is preprocessed uniformly.
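A rough sketch of the two database-side changes; sqlite3 is used here only as a stand-in for the Oracle database named later in the description, and the table and column names are assumptions:

```python
import sqlite3  # stand-in for the Oracle 12c database named in the description

def load_all_into_memory(conn, start_time, end_time):
    """Index the table and read the whole time window in a single query.

    The index on (record time, transformer name) speeds up screening, and the
    one-off bulk read means no thread has to touch the database during prediction.
    """
    cur = conn.cursor()
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_time_name "
        "ON load_data (record_time, transformer_name)"
    )
    cur.execute(
        "SELECT transformer_name, record_time, load_value "
        "FROM load_data WHERE record_time BETWEEN ? AND ?",
        (start_time, end_time),
    )
    return cur.fetchall()
```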
In one embodiment, as shown in FIG. 5, a method for power load prediction based on an LSTM neural network is provided. The specific work flow of the power load prediction comprises a data preprocessing process, a prediction process and a training process.
In one actual load prediction scenario, about 12,000 distribution transformers (power devices) are recorded daily at a frequency of 96 load data points per transformer. The program needs to provide, every day, prediction data for each distribution transformer for the following 3 days.
The hardware environment is: processor E5-2698 v4, 64 GB memory, 300 GB hard disk. The software environment is: Red Hat 7.4 operating system, Python 3.6, and an Oracle 12c (64-bit) database.
The conventional LSTM software configuration is as follows: the data set uses the previous 2 months of data, with epoch = 70, TIME_STEP = 96 × 3, INPUT_SIZE = 1, OUT_SIZE = 1, CELL_SIZE = 8 and LR = 0.002. Under this configuration the training time for each group of distribution transformer loads is 15 min, completing all training would take 125 days, and the daily prediction requirement cannot be met.
With the hyper-parameter storage and training-set adjustment of the present application, the first training uses epoch = 70; for subsequent training the number of training iterations is reduced and epoch is adjusted to 4, so the training time drops to 3 min per group. When multiple threads are used for load prediction, the prediction time of a single thread is affected; tests show that efficiency is highest with 33 threads, and the average time of subsequent training becomes 30 s. The distribution transformer data is then allocated to the threads for load prediction, with the network structure defined only once within each thread; the improved average time of subsequent training is 14 s, but the time of subsequent training increases noticeably as memory occupation grows. After a certain number of training runs the thread is closed and reopened to release memory and keep the time cost from rising; the improved average time of subsequent training is 11 s.
Internal and external global variables are defined so that historical data is read and processed uniformly and the prediction results are concatenated and written in a single pass, which avoids locking while reading and writing the database; the improved average time of subsequent training is 4 s. With these improvements the average subsequent training time per group of distribution transformers is 4 s, the method can be applied to load prediction on the scale of ten thousand devices, and the daily computation task can be completed within 12 hours. As shown in FIG. 6, the average time per group in daily operation is improved by more than 200 times.
Note that epoch is used to indicate the number of training sessions.
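As a minimal sketch of a network with the hyper-parameters listed above (Keras is used only as one possible implementation; the patent does not name a framework):

```python
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

TIME_STEP = 96 * 3   # 96 load points per day over 3 days, as listed above
INPUT_SIZE = 1
OUT_SIZE = 1
CELL_SIZE = 8
LR = 0.002

def build_lstm():
    """Build a small LSTM network with the hyper-parameters from the description."""
    model = Sequential([
        LSTM(CELL_SIZE, input_shape=(TIME_STEP, INPUT_SIZE)),
        Dense(OUT_SIZE),
    ])
    model.compile(optimizer=Adam(learning_rate=LR), loss="mse")
    return model

# First training: model.fit(x, y, epochs=70); follow-up daily training: epochs=4.
```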
It should be understood that although the steps in the flowcharts of FIGS. 2-4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided an electrical load prediction apparatus of an LSTM neural network, including:
the data obtaining unit 710 is configured to obtain a local hyper-parameter and load data of each electrical device within a preset time period.
And the data set dividing unit 720 is configured to classify the load data of the electrical equipment into a plurality of load data sets based on the identity information of the load data of the electrical equipment.
The thread starting unit 730 is configured to start a preset number of threads, and read corresponding load data sets based on the threads respectively.
And the load prediction unit 740 is configured to obtain power load prediction data by processing the corresponding load data set and the local hyperparameters respectively through each thread based on a pre-established LSTM neural network model.
For specific limitations of the power load prediction apparatus of the LSTM neural network, reference may be made to the above limitations of the power load prediction method of the LSTM neural network, which are not repeated here. The modules in the power load prediction apparatus of the LSTM neural network described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded, in hardware form, in or independent of the processor of the power load prediction system of the LSTM neural network, or stored, in software form, in the memory of the power load prediction system, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, as shown in fig. 8, there is provided a power load prediction system of an LSTM neural network, including a memory and a processor, the memory storing a computer program, and the processor implementing the steps of any of the above power load prediction methods of the LSTM neural network when executing the computer program.
Wherein the processor is operable to perform the steps of:
acquiring local hyper-parameters and load data of each power device within a preset time period;
classifying the load data of each power device into a plurality of load data sets based on the identity information of the corresponding power device load data;
starting a preset number of threads, and reading the corresponding load data sets based on the threads respectively;
and processing, in each thread, the corresponding load data set and the local hyper-parameters based on a pre-established LSTM neural network model to obtain power load prediction data.
In one embodiment, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of power load prediction of an LSTM neural network of any of the above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A power load prediction method based on an LSTM neural network is characterized by comprising the following steps:
acquiring local hyper-parameters and load data of each power device in a preset time period;
classifying each of the electrical equipment load data into a number of load data sets based on identity information corresponding to the electrical equipment load data;
starting a preset number of threads, and respectively reading the corresponding load data sets based on the threads;
and respectively processing the corresponding load data set and the local hyperparameter through each thread based on a pre-established LSTM neural network model to obtain power load prediction data.
2. The method of claim 1, wherein the step of obtaining local hyper-parameters comprises:
inquiring local hyper-parameters of the corresponding power equipment serial number file in the database to which the power equipment belongs;
and reading the local hyper-parameter corresponding to the power equipment number file according to the query result.
3. The method of claim 1, wherein the step of reading the corresponding load data sets based on each of the threads comprises:
respectively carrying out data preprocessing on each load data set to obtain each preprocessed load data set;
and respectively reading the corresponding preprocessed load data sets on the basis of the threads.
4. The method for predicting the power load of the LSTM neural network according to claim 1, wherein the step of obtaining the power load prediction data by processing the corresponding load data set and the local hyper-parameter through each thread based on the previously built LSTM neural network model comprises:
training the local hyper-parameter based on the LSTM neural network model to obtain a trained hyper-parameter;
and updating the local hyper-parameter into the post-training hyper-parameter.
5. The method of predicting the electrical load of an LSTM neural network as set forth in claim 1, further comprising the steps of:
after the thread processes the load data sets with preset number, closing the thread; and the thread is restarted for the next round of processing.
6. The method for predicting the power load of the LSTM neural network according to claim 1, wherein the step of obtaining the power load prediction data by processing the corresponding load data set and the local hyper-parameter through each thread based on the previously built LSTM neural network model further includes:
establishing a global array corresponding to the thread;
and caching the power load prediction data corresponding to the threads in the corresponding global array.
7. The method of any of claims 1 to 6, wherein the load data set comprises at least one of the electrical load data.
8. An apparatus for predicting a power load of an LSTM neural network, comprising:
the data acquisition unit is used for acquiring local hyper-parameters and load data of each power device in a preset time period;
the data set dividing unit is used for classifying the load data of the electric power equipment into a plurality of load data sets based on the identity information corresponding to the load data of the electric power equipment;
the thread starting unit is used for starting threads with preset number and respectively reading the corresponding load data sets based on the threads;
and the load prediction unit is used for respectively processing the corresponding load data set and the local hyper-parameter through each thread based on a preset LSTM neural network model to obtain power load prediction data.
9. A power load prediction system of an LSTM neural network comprising a memory and a processor, the memory storing a computer program, characterized in that the processor when executing the computer program implements the steps of the power load prediction method of the LSTM neural network according to any one of claims 1 to 7.
10. A computer readable storage medium, having stored thereon a computer program, wherein the computer program, when being executed by a processor, realizes the steps of the power load prediction method of the LSTM neural network according to any one of claims 1 to 7.
CN201911172170.7A 2019-11-26 2019-11-26 Power load prediction method, device and system based on LSTM neural network Active CN110909941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911172170.7A CN110909941B (en) 2019-11-26 2019-11-26 Power load prediction method, device and system based on LSTM neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911172170.7A CN110909941B (en) 2019-11-26 2019-11-26 Power load prediction method, device and system based on LSTM neural network

Publications (2)

Publication Number Publication Date
CN110909941A true CN110909941A (en) 2020-03-24
CN110909941B CN110909941B (en) 2022-08-02

Family

ID=69819508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911172170.7A Active CN110909941B (en) 2019-11-26 2019-11-26 Power load prediction method, device and system based on LSTM neural network

Country Status (1)

Country Link
CN (1) CN110909941B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488404A (en) * 2020-12-07 2021-03-12 广西电网有限责任公司电力科学研究院 Multithreading efficient prediction method and system for large-scale power load of power distribution network
CN113659565A (en) * 2021-07-19 2021-11-16 华北电力大学 Online prediction method for frequency situation of new energy power system
CN114065653A (en) * 2022-01-17 2022-02-18 南方电网数字电网研究院有限公司 Construction method of power load prediction model and power load prediction method
CN114881343A (en) * 2022-05-18 2022-08-09 清华大学 Short-term load prediction method and device of power system based on feature selection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932671A (en) * 2018-06-06 2018-12-04 上海电力学院 A kind of LSTM wind-powered electricity generation load forecasting method joined using depth Q neural network tune
CN109034500A (en) * 2018-09-04 2018-12-18 湘潭大学 A kind of mid-term electric load forecasting method of multiple timings collaboration

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932671A (en) * 2018-06-06 2018-12-04 上海电力学院 A kind of LSTM wind-powered electricity generation load forecasting method joined using depth Q neural network tune
CN109034500A (en) * 2018-09-04 2018-12-18 湘潭大学 A kind of mid-term electric load forecasting method of multiple timings collaboration

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488404A (en) * 2020-12-07 2021-03-12 广西电网有限责任公司电力科学研究院 Multithreading efficient prediction method and system for large-scale power load of power distribution network
CN113659565A (en) * 2021-07-19 2021-11-16 华北电力大学 Online prediction method for frequency situation of new energy power system
CN114065653A (en) * 2022-01-17 2022-02-18 南方电网数字电网研究院有限公司 Construction method of power load prediction model and power load prediction method
CN114881343A (en) * 2022-05-18 2022-08-09 清华大学 Short-term load prediction method and device of power system based on feature selection
CN114881343B (en) * 2022-05-18 2023-11-14 清华大学 Short-term load prediction method and device for power system based on feature selection

Also Published As

Publication number Publication date
CN110909941B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN110909941B (en) Power load prediction method, device and system based on LSTM neural network
CN112150237B (en) Multi-model fused order overdue early warning method, device, equipment and storage medium
CN111680841B (en) Short-term load prediction method, system and terminal equipment based on principal component analysis
CN116186548B (en) Power load prediction model training method and power load prediction method
CN113052389A (en) Distributed photovoltaic power station ultra-short-term power prediction method and system based on multiple tasks
CN116302898A (en) Task management method and device, storage medium and electronic equipment
CN114021425B (en) Power system operation data modeling and feature selection method and device, electronic equipment and storage medium
CN113592064A (en) Ring polishing machine process parameter prediction method, system, application, terminal and medium
CN112488404A (en) Multithreading efficient prediction method and system for large-scale power load of power distribution network
CN115528750B (en) Power grid safety and stability oriented data model hybrid drive unit combination method
CN111783827A (en) Enterprise user classification method and device based on load data
CN109766181A (en) A kind of RMS schedulability determination method and device based on deep learning
CN114844058A (en) Scheduling method for offshore wind power energy storage participating in peak shaving auxiliary service
CN114254762A (en) Interpretable machine learning model construction method and device and computer equipment
Koo et al. Sahws: Iot-enabled workflow scheduler for next-generation hadoop cluster
CN108009668B (en) Large-scale load adjustment prediction method applying machine learning
CN105224389A (en) The virtual machine resource integration method of theory of casing based on linear dependence and segmenting
CN109739638A (en) A kind of EDF schedulability determination method and device based on deep learning
CN115186887A (en) Multithreading parallel computing power load prediction method based on LSTM
CN112801372B (en) Data processing method, device, electronic equipment and readable storage medium
CN113010917B (en) Loss reduction analysis processing method with privacy protection for contemporaneous line loss management system
CN117829380B (en) Method, system, equipment and medium for long-term prediction of power use
Liu et al. Predicting Active Constraints Set in Security-Constrained Optimal Power Flow via Deep Neural Network
CN116231763B (en) Household energy management system optimization scheduling method and device with self-learning capability
Elmoukhtafi et al. Data-driven solutions for electricity price forecasting: The case of EU improvement project

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: 510620 Tianhe District, Guangzhou, Tianhe South Road, No. two, No. 2, No.

Applicant after: Guangzhou Power Supply Bureau of Guangdong Power Grid Co.,Ltd.

Address before: 510620 Tianhe District, Guangzhou, Tianhe South Road, No. two, No. 2, No.

Applicant before: GUANGZHOU POWER SUPPLY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant