CN116307159A - Load prediction method and device, electronic equipment and storage medium - Google Patents
- Publication number: CN116307159A (application number CN202310229294.4A)
- Authority: CN (China)
- Legal status: Pending (the listed status is an assumption and is not a legal conclusion)
Classifications
- G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
- G06N3/08 — Neural networks; learning methods
- G06Q50/06 — Energy or water supply
- H02J3/003 — Load forecast, e.g. methods or systems for forecasting future load demand
- Y04S10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The application discloses a load prediction method, a load prediction device, electronic equipment and a storage medium. The method specifically comprises the following steps: acquiring original load data of a target user; determining at least one influencing feature of the original load data according to the original load data; and determining the prediction result of the electricity load of the target user according to each influence characteristic. According to the technical scheme, the original load data are analyzed to obtain characteristics of different dimensions, and the electricity load of the target user is predicted according to these characteristics. Because characteristics of different dimensions have different influences on the electricity consumption of the target user, extracting these characteristics from the original load sequence and determining their influence on the electricity consumption through a machine learning model allows the load of the user to be predicted effectively and improves the accuracy of load prediction.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a load prediction method, a load prediction device, an electronic device, and a storage medium.
Background
With the development of the power industry, the power system has to cope with increasingly complex power supply demands. In recent years, owing to the rapid growth of various emerging industries and of power systems themselves, the demand for electric power in all aspects of social life keeps rising, and the problem of mismatch between power supply and demand has become more pronounced. Predicting future load data in a timely and accurate manner facilitates the orderly organization of electricity utilization work, helps ensure that the gap indexes of orderly electricity utilization are met, and effectively reduces the social impact of orderly electricity utilization.
Currently, for load prediction of an electric power system, conventional statistical methods include regression analysis, the state-space method, the time-series method, and the like. However, these traditional statistical methods can only analyze the original load data in isolation and summarize its regularities; they cannot take into account other factors that affect the original load data, so the accuracy of the prediction result is poor.
Disclosure of Invention
The application provides a load prediction method, a load prediction device, electronic equipment and a storage medium, so as to improve the accuracy of load prediction.
According to an aspect of the present application, there is provided a load prediction method, the method including:
acquiring original load data of a target user;
determining at least one influencing feature of the original load data according to the original load data;
and determining the prediction result of the electricity load of the target user according to each influence characteristic.
According to another aspect of the present application, there is provided a load predicting apparatus including:
the original data acquisition module is used for acquiring original load data of a target user;
the influence characteristic determining module is used for determining at least one influence characteristic of the original load data according to the original load data;
and the prediction result determining module is used for determining the prediction result of the electricity load of the target user according to each influence characteristic.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the load prediction method of any one of the embodiments of the present application.
According to another aspect of the present application, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute a load prediction method according to any embodiment of the present application.
According to the technical scheme, the original load data are analyzed to obtain characteristics of different dimensions, and the electricity load of the target user is predicted according to these characteristics. Because characteristics of different dimensions have different influences on the electricity consumption of the target user, extracting these characteristics from the original load sequence and determining their influence on the electricity consumption through a machine learning model allows the load of the user to be predicted effectively and improves the accuracy of load prediction.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a load prediction method according to a first embodiment of the present application;
FIG. 2A is a diagram of a model architecture for power system load prediction applicable in accordance with a second embodiment of the present application;
FIG. 2B is a schematic diagram of a multidimensional feature graph attention network applicable in accordance with a second embodiment of the present application;
fig. 3 is a schematic structural view of a load predicting device according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device implementing the load prediction method according to the embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will be made in detail and with reference to the accompanying drawings in the embodiments of the present application, it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a load prediction method according to an embodiment of the present application, where the method may be applied to a case where a power system predicts a power load of a user, and the method may be performed by a load prediction device, where the load prediction device may be implemented in a form of hardware and/or software, and the load prediction device may be configured in an electronic device. As shown in fig. 1, the method includes:
s110, acquiring original load data of a target user.
The target user is a power user whose future power load needs to be predicted, and the original load data may be data describing the power load of the target user in a certain past time period. The raw load data may be in tabular form, such as CSV format; the raw load data may include multidimensional information, for example the power usage (load) of each day in a historical period, the date, the weather, the environment, and the like may be included in a table. It can be understood that load prediction by the power system is necessarily the prediction of load that has not yet occurred, and the accuracy of predicting the future load can be improved by acquiring and analyzing the original load data of the target user in the historical period. Because the original load data describe the historical electricity consumption, they can be obtained directly from a database.
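For illustration only, a minimal sketch of reading such tabular original load data might look as follows; the file name, user identifier and column names are hypothetical assumptions and are not specified by this application.

```python
# Hypothetical example of loading multidimensional original load data from a CSV file.
import pandas as pd

raw = pd.read_csv("user_load_history.csv", parse_dates=["date"])  # assumed file and columns
target_user_id = 1001                                              # hypothetical user identifier
raw = raw[raw["user_id"] == target_user_id].sort_values("date")    # keep one user, in time order
print(raw[["date", "load", "weather", "is_holiday"]].head())       # daily load plus external factors
```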
S120, determining at least one influence characteristic of the original load data according to the original load data.
The influencing characteristic may represent a characteristic factor of the target user power consumption included in the original load data, which is reflected in time sequence, for example, may include periodicity, trend, holiday and the like of the power consumption. The periodicity of electricity consumption can be a characteristic factor of the original load data about the time period change of the electricity consumption. The trend may be a trend between peaks and valleys of the power usage during a day, i.e. when the power usage is higher and when the power usage is lower during a day. Holidays as date attributes in the time series can also be embodied in the original load data, and the electricity consumption during the holidays of the user can be greatly different from that of the common workdays. By analyzing the original load sequence, the factors with the influence characteristics are extracted, so that various different characteristics of the target user during electricity utilization are analyzed, and future electricity utilization loads of the target user are predicted on the basis of the influence factors with different dimensions. Of course, the analysis method may be any method in the related art, and the embodiment of the application is not limited, and for example, the machine learning network model may be used to analyze and process the original load sequence of the user.
In an alternative embodiment, the impact feature includes a load change feature, and the determining at least one impact feature of the original load data according to the original load data may include: determining differential characteristics of the original load data according to an original load sequence corresponding to the original load data and a preset first network model; and determining the load change characteristics of the original load data according to the differential characteristics and the original load sequence.
Wherein the original load sequence may be the original load data arranged in the form of a sequence. For example, the original load data of the target user for each day in the past 12 months are sorted by date to form an original load sequence; alternatively, the history may be the electricity load conditions within one day sorted by time. Depending on the load prediction scenario, the acquisition of the original load sequence differs accordingly. If subdivided along the time dimension, the prediction of the electricity load of the target user may be a one-day, one-month or one-quarter electricity load prediction, and the electricity data to be processed are selected according to the different targets of the electricity load prediction. After the electricity consumption data of the user in the set historical time period are obtained from the database, the electricity consumption data are preprocessed. The preprocessing method may include, but is not limited to, STL (Seasonal and Trend decomposition using Loess) and the like, and the raw load data are analyzed by the preprocessing method to obtain the aforementioned original load sequence.
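As a rough sketch of this preprocessing step, STL decomposition and min-max normalization could be performed as follows; statsmodels' STL and scikit-learn's MinMaxScaler are used as stand-ins for the methods named above, and the daily data and weekly period are assumptions made only for illustration.

```python
# Rough sketch: STL decomposition plus min-max normalization of a daily load series.
import numpy as np
from statsmodels.tsa.seasonal import STL
from sklearn.preprocessing import MinMaxScaler

load = 100.0 + 10.0 * np.random.rand(365)        # placeholder daily load values
result = STL(load, period=7).fit()               # weekly period is an assumption
trend, seasonal = result.trend, result.seasonal  # trend and periodicity components

normalized = MinMaxScaler().fit_transform(load.reshape(-1, 1))  # scaled to [0, 1]
```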
The load change characteristics can represent dynamic change information in the original load data of the electricity consumption of the target user, mainly embodied by how the electricity consumption of the user changes within a preset historical time period. The first network model may be a machine learning network model for load change analysis, optionally a differential learning network. In general, a differential learning network may be composed of two recurrent neural networks. The original load sequence is trained and analyzed by the first recurrent neural network of the differential learning network, and the output result is the differential feature of the original load data. The differential feature and the original load sequence are then input into the second recurrent neural network, which outputs the load change feature corresponding to the original load data.
The determining the differential feature of the original load data according to the original load sequence corresponding to the original load data and a preset first network model may include: and determining differential characteristics according to the differential sequence of the original load sequence and the differential learning network.
That is, before the differential learning network is used to determine the differential features and the load change features, a differential sequence is determined from the original load sequence and is used as the input of the first recurrent neural network in the differential learning network. The differential sequence can be obtained by subtracting, for each pair of adjacent elements in the original load sequence, the previous element from the next element, and arranging all the difference values in order. It can be understood that the differential sequence of the original load sequence yields the differential features through the first recurrent neural network of the differential learning network; the differential features and the original load data are then taken as input, and the load change features corresponding to the original load data are output through the second recurrent neural network of the differential learning network.
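For example, under the convention that each difference is the later value minus the earlier one, the differential sequence of a short toy load sequence could be formed as follows.

```python
# Toy example: the differential sequence is the element-wise difference of adjacent load values.
import numpy as np

original_sequence = np.array([10.0, 12.0, 11.5, 13.0, 12.0])
differential_sequence = np.diff(original_sequence)   # array([ 2. , -0.5,  1.5, -1. ])
```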
In another alternative embodiment, the influencing feature comprises an external factor fusion feature; the determining at least one influencing feature of the original load data according to the original load data may further include: determining the relevant attention weight among all influence features in the original load data according to the original load data and a preset second network model; and determining the fusion characteristics of external factors according to the attention weight.
Wherein the attention weights are used to characterize the correlation between features. The second network model may be a graph-based network for analyzing the influence of different features on the power consumption. For example, the electricity consumption of the target user is taken as the central node in the graph attention network, different features are taken as neighbor nodes of the central node, the correlation between the electricity consumption and different features such as environment, weather and date is calculated through the inherent algorithm of the graph attention network, and the correlation is quantified by the attention weights. According to the attention weights corresponding to the different features, the features of all the neighbor nodes are weighted and summed based on the attention weights, and the obtained result is the external factor fusion feature. Of course, the calculation result may be in the form of a matrix, which is convenient for training and calculation in a machine learning model.
Optionally, the second network model is a graph attention network; the determining the attention weight of the original load data according to the original load data and a preset second network model may include: determining relevant attention coefficients among all influence features in the original load data according to the original load data and the graph attention network; and determining the attention weight from the attention coefficients.
The attention coefficient of the original load data may be the similarity coefficient between the different features in the original load data and the power consumption, calculated by the network after the original load data are input into the graph attention network. The transformed features of the central node and of a neighbor node are concatenated, a dot product is then performed between the concatenated embedding and a learnable weight vector, and the obtained result can be used as the attention weight.
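A rough sketch of this attention computation for one central node and its neighbor features is given below; the tensor shapes, the LeakyReLU nonlinearity and the softmax normalization are common graph-attention choices assumed here for illustration.

```python
# Sketch of graph-attention weights between the central node (electricity consumption)
# and its neighbor feature nodes; shapes and the LeakyReLU/softmax choices are assumptions.
import torch
import torch.nn.functional as F

num_nodes, in_dim, hid_dim = 5, 8, 16                 # 1 central node + 4 neighbor features
h = torch.randn(num_nodes, in_dim)                    # raw node features
W = torch.nn.Linear(in_dim, hid_dim, bias=False)      # shared linear transformation
alpha = torch.nn.Parameter(torch.randn(2 * hid_dim))  # learnable weight vector

Wh = W(h)
center = Wh[0].expand(num_nodes - 1, hid_dim)         # transformed central node, repeated
concat = torch.cat([center, Wh[1:]], dim=1)           # splice center with each neighbor
e = F.leaky_relu(concat @ alpha)                      # attention coefficients e_1j
attention_weights = F.softmax(e, dim=0)               # normalized attention weights a_1j
```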
In yet another alternative embodiment, the influencing feature comprises a timing feature; the determining at least one influencing feature of the original load data according to the original load data may further include: and determining the time sequence characteristics according to the original load data and a preset time sequence convolution network.
It should be noted that, because the conventional convolutional neural network cannot effectively capture the dependency relationship between features in a longer time sequence due to the limitation of the convolutional kernel size, it is not suitable for modeling of a long time sequence. To overcome this problem, embodiments of the present application introduce a time-series convolution network by which the time trend of each time node is captured. Of course, the time node may be set by a related technician according to specific situations, for example, may be set to be calculated by day, or may be set to be calculated by hour, minute, etc.; likewise, the foregoing time-series convolution network may be set according to an existing time-series convolution network, and parameters of the network model may be adjusted based on different requirements, which is not limited in the embodiment of the present application.
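By way of illustration, one dilated causal 1-D convolution block of the kind used in temporal convolution networks is sketched below; the channel sizes, kernel size and dilation factors are assumptions that a practitioner would tune for the task.

```python
# Sketch of a dilated causal convolution block; growing dilation widens the receptive field
# so that longer time dependencies can be captured. All hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation           # pad on the left only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                                      # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.left_pad, 0)))         # output keeps the same length

tcn = nn.Sequential(
    CausalConv1d(1, 16, dilation=1), nn.ReLU(),
    CausalConv1d(16, 16, dilation=2), nn.ReLU(),
    CausalConv1d(16, 16, dilation=4), nn.ReLU(),
)
out = tcn(torch.randn(8, 1, 96))                               # e.g. 96 time points per day
```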
S130, determining a prediction result of the electricity load of the target user according to each influence characteristic.
The different features determined in the preceding steps at least include the load change feature, the external factor fusion feature, the time sequence feature, and the like. It should be specifically noted that there is no required order among the determinations of the different features (i.e., the load change feature, the external factor fusion feature, and the time sequence feature), and their execution order does not affect the final prediction result.
Optionally, the determining the prediction result of the electricity load of the target user according to each influence characteristic may include: and determining a prediction result according to each influence characteristic and a preset convolutional neural network.
Specifically, the different feature values are concatenated at the output stage, and the dimension of the features is reduced through one convolution to output the predicted value. The preset convolutional neural network can adopt any convolutional neural network model in the related technology, and the specific model parameters can be set by relevant technicians according to the specific situation.
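A minimal sketch of this output stage is shown below, assuming the three feature streams have already been aligned to the same temporal length; all tensor shapes are illustrative assumptions.

```python
# Sketch of fusing the three feature streams and reducing them to a load prediction
# with a single 1x1 convolution; shapes are assumptions for illustration.
import torch
import torch.nn as nn

batch, channels, T = 8, 16, 96
tcn_feat = torch.randn(batch, channels, T)    # time-sequence features
diff_feat = torch.randn(batch, channels, T)   # load change features
gat_feat = torch.randn(batch, channels, T)    # external factor fusion features

fused = torch.cat([tcn_feat, diff_feat, gat_feat], dim=1)  # (batch, 3*channels, T)
head = nn.Conv1d(3 * channels, 1, kernel_size=1)           # one convolution for dimension reduction
prediction = head(fused).squeeze(1)                        # (batch, T) predicted load values
```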
According to the technical scheme, the original load data are analyzed to obtain characteristics of different dimensions, and the electricity load of the target user is predicted according to these characteristics. Because characteristics of different dimensions have different influences on the electricity consumption of the target user, extracting these characteristics from the original load sequence and determining their influence on the electricity consumption through a machine learning model allows the load of the user to be predicted effectively and improves the accuracy of load prediction.
Optionally, after determining the prediction result of the electricity load of the target user, the method may further include: and determining at least one error index of the predicted result according to the predicted result and the original load data.
The error indicator may be an error value for evaluating the accuracy of the prediction result, and may include, but is not limited to, MAE (Mean Absolute Error), MSE (Mean Square Error), MAPE (Mean Absolute Percentage Error), and the like. The prediction result is compared with the original load data of the historical period, several error indicators are calculated, and the prediction result is evaluated from different indicator dimensions to determine its accuracy. It will be appreciated that the raw load data of the historical period can be invoked flexibly during model training. For example, training is performed with the original load data of the past year, and the prediction result output by the model is compared with the corresponding original load data to verify the accuracy of the model. In this way, the error between the prediction result and the actual situation can be reflected, which helps relevant technicians adjust the model and further improves the accuracy of power load prediction.
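For instance, the three error indices could be computed as follows for a prediction compared against the held-back original load data; the numbers are toy values.

```python
# Minimal sketch of MAE, MSE and MAPE between predicted and actual load values.
import numpy as np

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_pred - y_true))
    mse = np.mean((y_pred - y_true) ** 2)
    mape = np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0  # assumes no zero loads
    return mae, mse, mape

print(evaluate([100.0, 120.0, 95.0], [98.0, 125.0, 90.0]))
```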
Example two
Fig. 2A is a model architecture diagram for power system load prediction according to a second embodiment of the present application, which is a preferred embodiment provided on the basis of the foregoing embodiments. As shown in fig. 2A, specifically includes:
The multidimensional time-series data of the user are obtained and preprocessed. The preprocessing includes extracting features such as periodicity, trend and holidays from the load sequence of the load data by the STL decomposition method, and normalizing the data with MinMaxScaler(); the normalized data effectively reduce the computational cost of the model. The normalization is calculated as follows:
x_norm = (x − min(X_train)) / (max(X_train) − min(X_train))

wherein x_norm represents the normalized data; max(X_train) represents the maximum value in the training set (corresponding to the set of raw load data), and min(X_train) represents the minimum value in the training set. All the normalized data are assembled to obtain the original load sequence corresponding to the original load data.
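As a small illustration of the formula above, the maximum and minimum are computed on the training set only and then applied to all values, which mirrors fitting MinMaxScaler() on the training split; the numbers below are toy values.

```python
# Toy illustration of the min-max normalization formula using training-set statistics.
import numpy as np

X_train = np.array([80.0, 120.0, 95.0, 150.0])          # toy training loads
X_all = np.array([80.0, 120.0, 95.0, 150.0, 160.0])     # training values plus a later value

x_min, x_max = X_train.min(), X_train.max()
X_norm = (X_all - x_min) / (x_max - x_min)              # values beyond the train range may exceed 1
```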
The conventional convolutional neural network cannot capture long-time dependency relationships well due to the limitation of the convolution kernel size and is generally not suitable for modeling a long time sequence, so the embodiment of the present application adopts a time-series convolution network to capture the time trend of each time node, and at the same time constructs a differential learning network to learn the dynamic change features of the load data (the load change features in this embodiment). The differential learning network is divided into two recurrent neural networks. First, in the first recurrent neural network, the differential sequence is determined from the differences of adjacent original load data, and the differential features of the original load data are calculated through the inherent state-update process of the recurrent neural network. Given the original load sequence X = {x_1, x_2, …, x_T}, where T denotes the number of time steps, the differential sequence ΔX is calculated as follows:

Δx_t = x_{t+1} − x_t, t = 1, 2, …, T−1

That is, the differences between successive values in the original load sequence form the differential sequence, which is input into the first recurrent neural network for learning to obtain the differential features. The learned differential features, together with the original load sequence, are then input into the second recurrent neural network of the differential learning model, whose state transition process is calculated as follows:

z_t = σ(W_z · [h_{t−1}, x_t])
r_t = σ(W_r · [h_{t−1}, x_t])
h̃_t = tanh(W · [r_t ⊙ h_{t−1}, x_t])
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t

wherein z_t is the update gate of the recurrent neural network; r_t is the reset gate of the recurrent neural network; W is a weight matrix of the recurrent neural network, and correspondingly W_z is the weight matrix of the update gate and W_r is the weight matrix of the reset gate; σ is the activation function of the recurrent neural network; x_t here denotes the input of the second recurrent neural network at time step t. The differential features of the original load data learned by the first recurrent neural network form an intermediate encoding, and the load change features of the original load data calculated by the differential learning network are encoded as H = {h_1, h_2, …, h_T}.
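A rough PyTorch sketch of this two-stage differential learning network is given below: the first GRU encodes the differential sequence, and the second GRU consumes the learned differential features together with the original load sequence to produce H = {h_1, …, h_T}. The hidden size and the zero-padding of the differential sequence back to length T are assumptions made for illustration.

```python
# Sketch of a differential learning network built from two GRUs, following the
# description above; hidden size and padding convention are assumptions.
import torch
import torch.nn as nn

class DifferentialLearningNetwork(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.diff_rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.load_rnn = nn.GRU(input_size=1 + hidden, hidden_size=hidden, batch_first=True)

    def forward(self, x):                               # x: (batch, T, 1) original load sequence
        dx = x[:, 1:, :] - x[:, :-1, :]                 # differential sequence (length T-1)
        dx = torch.cat([torch.zeros_like(x[:, :1, :]), dx], dim=1)  # pad back to length T
        diff_feat, _ = self.diff_rnn(dx)                # differential features
        h, _ = self.load_rnn(torch.cat([x, diff_feat], dim=2))      # load change features H
        return h

H = DifferentialLearningNetwork()(torch.randn(8, 96, 1))             # (8, 96, 32)
```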
GAT (Graph Attention) is adopted to perform spatial modeling among the multidimensional feature variables of the load data. For example, the electricity consumption in the original load data may be used as the central node, and features such as the date, environment and weather in the original load data may be used as neighbor nodes of the electricity consumption. In the multidimensional feature graph attention network, for node i, the attention coefficient e_ij between node i and each of its neighbor nodes j is computed one by one:

e_ij = α^T [W h_i ‖ W h_j]

The above formula calculates the attention coefficient between a pair of nodes: the transformed features of nodes i and j are first concatenated, the concatenated embedding is then dot-multiplied with a learnable weight vector α, and finally the attention coefficients are normalized to obtain the attention weights:

a_ij = exp(e_ij) / Σ_{k∈N(i)} exp(e_ik)

{a_12, a_13, …, a_1m} are the attention weights from the other features to the electricity consumption feature and represent the correlation dependence between the features, as shown in FIG. 2B.
After the attention weights are obtained, attention-based weighted summation is carried out over the features of all the neighbor nodes:

h_i' = σ( Σ_{j∈N(i)} a_ij · W h_j )

h_i' is the new feature output by the GAT that merges the neighbor node information for each node i, where σ is the activation function. The output of the multidimensional feature graph attention network is an n × m matrix, with n representing the time step and m representing the total number of nodes.
The outputs of the time-series convolution network, the differential learning network and the multidimensional feature graph attention network are concatenated, and feature dimension reduction is performed through one convolution to output the load prediction result. Of course, since the dimension of the graph attention network output is relatively high and the computation therefore larger, the dimension of the graph attention network output can first be reduced once before the input convolution and then combined with the output of the differential learning network for the input convolution. In the model training process, error indexes such as the mean absolute error (Mean Absolute Error, MAE) are used as training targets to reflect the actual situation of the prediction error:

MAE = (1/n) Σ_{i=1}^{n} | ŷ_i − y_i |

wherein ŷ_i represents the predicted value of the model, y_i represents the actual value, and n represents the number of samples.
In general, the machine learning model described above is composed of three parts: a time-series convolution network, a differential learning network and a multidimensional feature graph attention network. The time-series convolution network is used for modeling the long-sequence time dependency of the load data, the differential learning network learns the dynamic change features of the load data, and the multidimensional feature graph attention network extracts the spatial features among the multidimensional feature variables. The electric load is predicted through these different features, so that the prediction accuracy is improved.
In addition, the load prediction model based on the time-series convolution network, the differential learning network and the multidimensional feature graph attention network in the embodiment of the present application (abbreviated as TCN_DLGAT) shows better performance after experiments and comparison. As shown in Table 1, TCN_DLGAT is compared, on error indexes such as MAE, MSE and MAPE, with LSTM (long short-term memory model), CNN_LSTM (convolutional neural network combined with LSTM), LSTNet (an LSTM model combined with a temporal attention mechanism), Transformer, TCN (Temporal Convolutional Network), and the like.
TCN_DLGAT performs much better than the recurrent-neural-network-based models, including LSTM and CNN_LSTM, with MAE, MSE and MAPE all lower than those of these models. It is also superior to the attention-mechanism-based models (LSTNet and Transformer). The plain temporal convolution network does not perform as well as TCN_DLGAT either, since it only enlarges the receptive field and does not learn the dynamic change features of the raw load data or the spatial correlation between them and other influencing factors.
Furthermore, ablation experiments and comparisons were performed on the basis of the TCN_DLGAT model proposed in the embodiments of the present application. That is, the effectiveness of each network in processing the different-dimensional features of the raw load data was verified by disabling the differential learning network, the time-series convolution network and the multidimensional feature graph attention network, respectively; the complete TCN_DLGAT model is compared with the three models TCN_GAT, TCN_DL and DLGAT.
TABLE 1
As shown in Table 2, when predicting 96 preset time points in the coming day, the MAE, MSE and MAPE of TCN_DLGAT are all lower than those of TCN_GAT, which illustrates the importance to the model prediction result of the dynamic change features learned from the differential features of the load data in the differential learning network. The results of TCN_DL demonstrate that the multidimensional feature graph attention network is relatively adept at capturing features such as holidays and seasonality, and that the spatial correlation between these different features can be reflected by the graph attention network. Meanwhile, DLGAT omits the time-series convolution network that can capture the features of a long time sequence, so this model cannot capture the characteristics of the load data as a whole; learning the time dependency of the load data with a time-series convolution network is therefore also important. In conclusion, the load prediction model equipped with the differential learning network, the time-series convolution network and the multidimensional feature graph attention network achieves excellent accuracy.
TABLE 2
Example III
Fig. 3 is a schematic structural diagram of a load prediction device according to a third embodiment of the present application. As shown in fig. 3, the load prediction apparatus 300 includes:
an original data obtaining module 310, configured to obtain original load data of a target user;
an influence characteristic determining module 320, configured to determine at least one influence characteristic of the original load data according to an original load sequence corresponding to the original load data;
the prediction result determining module 330 is configured to determine a prediction result of the electricity load of the target user according to each influence feature.
According to the technical scheme, the original load data are analyzed to obtain characteristics of different dimensions, and the electricity load of the target user is predicted according to these characteristics. Because characteristics of different dimensions have different influences on the electricity consumption of the target user, extracting these characteristics from the original load sequence and determining their influence on the electricity consumption through a machine learning model allows the load of the user to be predicted effectively and improves the accuracy of load prediction.
In an alternative embodiment, the influencing feature comprises a load change feature; the influence feature determination module 320 may include:
the differential feature determining unit is used for determining differential features of the original load data according to an original load sequence corresponding to the original load data and a preset first network model;
and a load change determining unit for determining a load change characteristic of the original load data based on the differential characteristic and the original load data.
Alternatively, the first network model may be a differential learning network.
In another alternative embodiment, the influencing feature comprises an external factor fusion feature; the influence feature determining module 320 may further include:
the attention weight determining unit is used for determining the relevant attention weight among all influence features in the original load data according to the original load data and a preset second network model;
and the fusion characteristic determining unit is used for determining the fusion characteristic of the external factors according to the attention weight.
Optionally, the second network model is a graph attention network; the attention weight determining unit may include:
an attention coefficient determining subunit, configured to determine the relevant attention coefficients among all the influence features in the original load data according to the original load data and the graph attention network;
and the weight determination subunit is used for determining the attention weight according to the attention coefficient.
In yet another alternative embodiment, the influencing feature comprises a timing feature; the influence feature determining module 320 may further include:
the time sequence feature determining unit is used for determining time sequence features according to the original load data and a preset time sequence convolution network.
In an alternative embodiment, the prediction result determining module 330 may be specifically configured to: and determining a prediction result according to each influence characteristic and a preset convolutional neural network.
The load prediction device provided by the embodiment of the application can execute the load prediction method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of executing the load prediction methods.
Example IV
Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the load prediction method.
In some embodiments, the load prediction method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the load prediction method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the load prediction method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions of the present application are achieved, and the present application is not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.
Claims (10)
1. A method of load prediction, the method comprising:
acquiring original load data of a target user;
determining at least one influencing feature of the original load data according to the original load data;
and determining a prediction result of the electricity load of the target user according to each influence characteristic.
2. The method of claim 1, wherein the influencing feature comprises a load change feature; the determining at least one influencing feature of the original load data according to the original load data comprises:
determining differential characteristics of the original load data according to an original load sequence corresponding to the original load data and a preset first network model;
and determining the load change characteristics of the original load data according to the differential characteristics and the original load data.
3. The method of claim 2, wherein the first network model is a differential learning network.
4. The method of claim 1, wherein the influencing feature comprises an external factor fusion feature; the determining at least one influencing feature of the original load data according to the original load data further comprises:
determining the relevant attention weight among all influence features in the original load data according to the original load data and a preset second network model;
and determining the external factor fusion characteristic according to the attention weight.
5. The method of claim 4, wherein the second network model is a graph attention network; the determining the attention weight of the original load data according to the original load data and a preset second network model comprises the following steps:
determining relevant attention coefficients among all influence features in the original load data according to the original load data and the graph attention network;
and determining the attention weight according to the attention coefficient.
6. The method of claim 1, wherein the influencing feature comprises a timing feature; the determining at least one influencing feature of the original load data according to the original load data further comprises:
and determining the time sequence characteristics according to the original load data and a preset time sequence convolution network.
7. The method of claim 1, wherein determining a predicted outcome of the electrical load of the target user based on each impact characteristic comprises:
and determining the prediction result according to the influence characteristics and a preset convolutional neural network.
8. A load predicting apparatus, comprising:
the original data acquisition module is used for acquiring original load data of a target user;
an influence characteristic determining module, configured to determine at least one influence characteristic of the original load data according to the original load data;
and the prediction result determining module is used for determining the prediction result of the electricity load of the target user according to each influence characteristic.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the load prediction method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the load prediction method of any one of claims 1-7.
Priority Applications (1)
- CN202310229294.4A — filed 2023-03-09, priority date 2023-03-09 — Load prediction method and device, electronic equipment and storage medium (status: Pending)

Publications (1)
- CN116307159A — published 2023-06-23

Family
- ID=86799045
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination