CN117709522A - Multi-factor fusion medium-and-long-term power load multi-task learning prediction method - Google Patents


Info

Publication number
CN117709522A
CN117709522A (application CN202311659104.9A)
Authority
CN
China
Prior art keywords
auxiliary
layer
load
convolution
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311659104.9A
Other languages
Chinese (zh)
Inventor
钱晓瑞
詹祥澎
肖恺
林女贵
洪华伟
朱玲玲
沈一民
陈旭鹏
陈筱珺
潘舒宸
游妮萍
游元通
李灿辉
唐敏燕
吴鹏
张煜
谭显东
孙毅
张叙航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Energy Research Institute Co Ltd
North China Electric Power University
State Grid Fujian Electric Power Co Ltd
Marketing Service Center of State Grid Fujian Electric Power Co Ltd
Original Assignee
State Grid Energy Research Institute Co Ltd
North China Electric Power University
State Grid Fujian Electric Power Co Ltd
Marketing Service Center of State Grid Fujian Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Energy Research Institute Co Ltd, North China Electric Power University, State Grid Fujian Electric Power Co Ltd, Marketing Service Center of State Grid Fujian Electric Power Co Ltd filed Critical State Grid Energy Research Institute Co Ltd
Priority to CN202311659104.9A priority Critical patent/CN117709522A/en
Publication of CN117709522A publication Critical patent/CN117709522A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a multi-factor fusion medium-and-long-term power load multi-task learning prediction method, which comprises the following steps: acquiring historical load data and load influence data as input data of a prediction model to obtain a load prediction result; the prediction model comprises a main network and an auxiliary network; the main network comprises a first encoding block, a first decoding block and a second decoding block; the auxiliary network includes a feature extraction block, an auxiliary encoding block, and an auxiliary decoding block.

Description

Multi-factor fusion medium-and-long-term power load multi-task learning prediction method
Technical Field
The invention relates to a multi-factor fusion medium-and-long-term power load multi-task learning prediction method, and belongs to the field of load prediction.
Background
Power load is influenced by economic policy, meteorological changes, holidays, consumption levels, and other factors: in the short term it exhibits volatility and even abrupt changes, while in the medium and long term it is affected by many hard-to-predict factors. Medium- and long-term power load prediction is therefore a complex nonlinear problem influenced by multiple factors, yet many existing medium- and long-term methods consider only the trend of a single factor. Under the superimposed influence of multiple factors, a single fixed pattern can hardly describe the actual, complex variation of the power load. A medium- and long-term load prediction method with higher accuracy is therefore needed.
Patent CN116258262A proposes a new short-term multidimensional time-series prediction framework. The structure first uses an interpretable time-series feature encoder to decompose the trend and seasonal features of the target sequence data. An auxiliary information encoder then encodes the feature-factor data into a hidden information matrix, and a multi-head self-attention mechanism captures high-dimensional autocorrelation features. The model's time-series feature encoder has strong nonlinear modeling capability and can meet the requirements of target-sequence feature extraction. Finally, a feature fusion module fuses the different types of extracted features, and a decoder capable of extracting temporal features performs the final multidimensional time-series prediction.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention designs a multi-factor fusion medium-long term power load multi-task learning prediction method.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
Technical Solution 1
A multi-factor fusion medium-long term power load multi-task learning prediction method comprises the following steps:
acquiring historical load data and load influence data as input data of a prediction model; the prediction model comprises a main network and an auxiliary network; the main network comprises a first encoding block, a first decoding block and a second decoding block; the auxiliary network comprises a feature extraction block, an auxiliary encoding block and an auxiliary decoding block;
inputting historical load data to a main network, and performing convolution operation and maximum pooling operation on the historical load data by a first coding block to obtain a first depth characteristic; the first decoding block performs time convolution operation on the first depth feature to obtain a first decoding result, and performs nonlinear transformation on the first decoding result to obtain an intermediate predicted value;
inputting historical load data and load influence data to an auxiliary network, and performing graph convolution on the historical load data and the load influence data by a feature extraction block to obtain spatial features; the auxiliary coding block carries out convolution operation and maximum pooling operation on the space characteristics to obtain second depth characteristics; the auxiliary decoding block performs time convolution operation on the second depth feature to obtain a second decoding result, and performs nonlinear transformation on the second decoding result to obtain auxiliary predicted quantity;
and fusing the intermediate predicted value and the auxiliary predicted value, inputting the fused result into a second decoding block, performing time convolution operation on the fused result by the second decoding block to obtain a third decoding result, and performing nonlinear transformation on the third decoding result to obtain a load predicted result.
Further, the feature extraction block includes two graph convolution layers, expressed as:
f(X_t, A) = σ(A·ReLU(A·X_t·W^(0))·W^(1))
where ReLU(·) and σ(·) are the activation functions of the first and second graph convolution layers, respectively; W^(0) and W^(1) are the trainable weight matrices of the first and second graph convolution layers; X_t is the graph node feature matrix; A is the adjacency matrix.
Further, the first coding block comprises a one-dimensional convolution layer and a maximum pooling layer which are connected in sequence; the first decoding block comprises a time convolution network and a dense connection layer which are connected in sequence.
Further, the auxiliary coding block comprises two one-dimensional convolution layers and a max pooling layer connected in sequence; the auxiliary decoding block comprises two temporal convolution networks, a Dropout layer, and a dense connection layer connected in sequence.
Further, the intermediate prediction and the auxiliary prediction are fused through a Concatenate layer.
Further, the second decoding block comprises two temporal convolution networks, a Dropout layer, and a dense connection layer connected in sequence.
Further, a main task and an auxiliary task are set to train the prediction model; the auxiliary prediction is taken as the output of the auxiliary task, and the load prediction result as the output of the main task;
the losses of the primary and secondary tasks are weighted and summed as shown in the following equation:
Loss = 1·Loss_1 + 0.3·Loss_2
where Loss_1 and Loss_2 denote the losses of the main and auxiliary tasks, and y_i and ŷ_i denote the actual and predicted load values, respectively.
Technical Solution 2
An electronic device, comprising:
a memory for storing executable instructions;
and a processor for executing the executable instructions stored in the memory to implement the steps of Technical Solution 1.
Technical Solution 3
A storage medium storing one or more programs executable by one or more processors to implement the steps of Technical Solution 1.
Compared with the prior art, the invention has the following characteristics and beneficial effects:
the multi-factor fusion type linear and nonlinear combined prediction is carried out by constructing a prediction model which comprises a feature extraction block based on graph convolution and a plurality of decoding blocks based on time convolution, so that the accuracy of the prediction model is improved. Furthermore, in the training process, multidimensional external factors are used as auxiliary tasks, the associated characteristics among different prediction tasks are mined, the time and the computing resources required by model training are reduced through a parameter sharing mechanism, and the generalization performance of the model is improved, so that the problems of difficulty in multi-factor fusion, low training efficiency and the like in electric power and electric quantity prediction are solved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the GCN structure;
FIG. 3 is a block diagram of TCN;
fig. 4 is a MTL-TCN framework diagram.
Detailed Description
The present invention will be described in more detail with reference to examples.
As shown in fig. 1, the multi-factor fusion medium- and long-term power load multi-task learning prediction method includes the following steps:
Acquire historical load data and load influence data such as humidity, temperature, and economic indicators; preprocess both the historical load data and the load influence data, including missing-value handling, outlier handling, normalization, and categorical-data encoding, to improve data quality.
And constructing a prediction model, wherein the prediction model comprises a main network and an auxiliary network. The primary network includes a first encoded block based on convolution, a first decoded block based on temporal convolution, and a second decoded block based on temporal convolution. The auxiliary network includes a feature extraction block based on graph convolution, an auxiliary encoding block based on convolution, and an auxiliary decoding block based on temporal convolution.
Specifically, as shown in figs. 2-4, the main network includes the first encoding block, the first decoding block, and the second decoding block. The first encoding block comprises a one-dimensional convolution layer and a max pooling layer connected in sequence; the first decoding block comprises a temporal convolution network and a dense connection layer connected in sequence. The auxiliary network comprises the feature extraction block, the auxiliary encoding block, and the auxiliary decoding block; the feature extraction block comprises graph convolution layers; the auxiliary encoding block comprises two one-dimensional convolution layers and a max pooling layer connected in sequence; the auxiliary decoding block comprises two temporal convolution networks, a Dropout layer, and a dense connection layer connected in sequence. The second decoding block comprises two temporal convolution networks, a Dropout layer, and a dense connection layer connected in sequence.
Historical load data are input to the main network; the convolution layer and max pooling layer in the first encoding block extract depth features of the power load, the temporal convolution network in the first decoding block decodes these depth features, and the dense connection layer maps them through a nonlinear transformation into the intermediate prediction.
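The encoding step of the main network, a one-dimensional convolution followed by max pooling, can be sketched in plain NumPy. The kernel values, pooling size, and the toy load sequence below are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation) of a sequence with a kernel."""
    s = len(kernel)
    return np.array([np.dot(x[i:i + s], kernel) for i in range(len(x) - s + 1)])

def max_pool1d(x, size=2):
    """Non-overlapping max pooling; a trailing partial window is truncated."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Toy historical load sequence and an illustrative smoothing kernel.
load = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 13.0, 16.0, 18.0])
# ReLU after the convolution stands in for the encoder's nonlinearity.
feature = max_pool1d(np.maximum(conv1d(load, np.array([0.25, 0.5, 0.25])), 0.0))
print(feature.shape)  # (3,)
```

A real first encoding block would learn the kernel weights during training; this sketch only shows the shape of the computation.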
The historical load data and load influence data are input to the feature extraction block, and their spatial features are extracted through the graph convolution transform; the two convolution layers and max pooling layer in the auxiliary encoding block further extract depth features of the spatial features, the two temporal convolution networks of the auxiliary decoding block decode them, and the dense connection layer applies a nonlinear transformation to obtain the auxiliary prediction.
The intermediate prediction output by the first decoding block and the auxiliary prediction output by the auxiliary decoding block are fused through a Concatenate layer, and the fused result is passed to the second decoding block; the two temporal convolution networks of the second decoding block perform temporal convolution on the fused result to obtain the third decoding result, and the dense connection layer applies a nonlinear transformation to the third decoding result to obtain the load prediction result.
In one embodiment, the historical load data and load influence data are combined into a multidimensional vector X_t ∈ R^(M×N) comprising a plurality of time series:
X_t = [X_1, X_2, …, X_N]
where N is the number of time series; M is the number of historical time steps in days; and X_1, X_2, …, X_N denote the power load value, temperature, humidity, economic data, etc., at time t over the previous M days.
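A minimal NumPy sketch of assembling X_t from daily series; every value below is made up for illustration:

```python
import numpy as np

# Illustrative daily series over M = 5 historical days (all values invented).
load  = np.array([820.0, 835.0, 810.0, 860.0, 845.0])  # power load
temp  = np.array([21.0, 22.5, 20.1, 24.3, 23.0])       # temperature
humid = np.array([0.61, 0.58, 0.70, 0.55, 0.60])       # humidity
econ  = np.array([1.02, 1.02, 1.03, 1.03, 1.04])       # economic index

# X_t ∈ R^(M×N): M rows (historical days), N columns (time series).
X_t = np.stack([load, temp, humid, econ], axis=1)
print(X_t.shape)  # (5, 4)
```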
The i-th time series X_i of the multidimensional vector X_t is expressed as:
X_i = [x_i^1, x_i^2, …, x_i^M]
where x_i^j is the observation of the i-th time series at time t on the j-th day.
The graph structure data, composed of the multidimensional vector and the learned association relationships between the time series, is defined as a power load graph G = (X_t, E), where X_t is the set of graph nodes and E is the set of edges formed by connections between nodes; the number of graph nodes, i.e. the number of time series, is N.
A GCN is formed by stacking graph convolution layers. In the GCN structure, one graph convolution layer combines a vertex's features with the features of the nodes connected to it to generate new features for that vertex; the new features then serve as node features for the next graph convolution layer, and so on up to the classification layer. Given the node features of the graph structure data and the connection pattern of the nodes, i.e. the adjacency matrix, the multi-layer graph convolution computation can effectively mine the relationships between nodes and extract features from the power load graph. The propagation formula between graph convolution layers is:
H^(l+1) = σ(D̂^(−1/2)·Â·D̂^(−1/2)·H^(l)·W^(l)),  Â = A + I
where I is the identity matrix; A is the adjacency matrix, generally a 0-1 matrix of size N×N; Â is the newly generated adjacency matrix, used in place of A in the next layer; D̂ is the degree matrix of Â, i.e. the number of nodes connected to each vertex; H^(l) is the input feature of the l-th layer, i.e. the output of the previous layer; W^(l) is the weight matrix of the current graph convolution layer; σ is a nonlinear activation function.
Through repeated graph convolution computation, the features of neighboring nodes and of the node itself are effectively aggregated into the central node. The number of graph convolution layers therefore plays a decisive role in load prediction accuracy: adding GCN layers enlarges the receptive field of the central node, so information can be extracted from a larger area. However, more layers are not always better; each added layer updates the node states once more, which can cause over-smoothing. The feature extraction block constructed in this embodiment therefore includes two graph convolution layers, with the model expression:
f(X_t, A) = σ(A·ReLU(A·X_t·W^(0))·W^(1))
where ReLU(·) is the activation function, and W^(0) and W^(1) are the trainable weight matrices of the first and second layers. Through the graph convolution transform, the multidimensional vector X_t ∈ R^(M×N) is mapped to spatial features in R^(M×T) that incorporate weather, economic, and other information, where T is the sequence length of the load prediction result.
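A NumPy sketch of the two-layer graph convolution f(X_t, A). Two assumptions are made for illustration: node features are laid out as an N×F matrix, and the second activation σ is taken to be a sigmoid (the patent does not fix it); the adjacency matrix is normalized as D̂^(−1/2)·Â·D̂^(−1/2) per the propagation rule above:

```python
import numpy as np

def normalized_adjacency(A):
    """Â = A + I with symmetric degree normalization D̂^(-1/2)·Â·D̂^(-1/2)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_two_layer(X, A, W0, W1):
    """f(X, A) = σ(Ã·ReLU(Ã·X·W0)·W1), with σ taken as a sigmoid here."""
    A_norm = normalized_adjacency(A)
    H = np.maximum(A_norm @ X @ W0, 0.0)              # layer 1: ReLU
    return 1.0 / (1.0 + np.exp(-(A_norm @ H @ W1)))   # layer 2: sigmoid

rng = np.random.default_rng(0)
N, F_in, F_hid, F_out = 4, 3, 8, 2                    # 4 nodes (time series)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)             # illustrative 0-1 adjacency
X = rng.normal(size=(N, F_in))                        # random node features
W0 = rng.normal(size=(F_in, F_hid))
W1 = rng.normal(size=(F_hid, F_out))
out = gcn_two_layer(X, A, W0, W1)
print(out.shape)  # (4, 2)
```

In the patent's trained model, W^(0) and W^(1) would be learned jointly with the rest of the network rather than sampled randomly.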
In one embodiment, as shown in fig. 3, the temporal convolutional network (Temporal Convolutional Network, TCN) is composed of multiple residual blocks whose core is the dilated causal convolution. Introducing dilated causal convolution enlarges the model's receptive field while guaranteeing that the output at the current time step depends only on the inputs at the current and past time steps. For an element a of the sequence, the dilated convolution is computed as:
F(a) = (X_t *_d f)(a) = Σ_{c=0}^{s−1} f(c)·X_{a−d·c}
where f is the convolution kernel indexed by c ∈ {0, …, s−1}; X_t is the input data; d is the dilation coefficient; * denotes convolution; s is the convolution kernel size; and X_{a−d·c} is the lower-layer neuron read by the upper-layer neuron during the dilated convolution.
With each hidden layer the input passes through, d grows exponentially, so the multi-layer convolution gives the TCN a large receptive field, allowing it to capture more accurately the relationships between load-sequence values that are far apart in time. Each layer performs its convolution, and the convolution result is output after the multi-layer computation; the receptive field r of a single layer is:
r=(s-1)d+1
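The dilated causal convolution and the receptive-field rule can be sketched directly in NumPy; the zero-padding of out-of-range positions is an assumption about boundary handling, which the patent does not specify:

```python
import numpy as np

def dilated_causal_conv(x, kernel, d):
    """y[a] = sum_c f[c] * x[a - d*c]: the output at position a depends only on
    current and past inputs. Positions before the sequence start are zero-padded."""
    s = len(kernel)
    y = np.zeros_like(x, dtype=float)
    for a in range(len(x)):
        for c in range(s):
            if a - d * c >= 0:
                y[a] += kernel[c] * x[a - d * c]
    return y

def receptive_field(s, d):
    """Per-layer receptive field r = (s - 1)·d + 1."""
    return (s - 1) * d + 1

x = np.arange(8, dtype=float)
y = dilated_causal_conv(x, np.array([1.0, 1.0]), d=2)  # y[a] = x[a] + x[a-2]
print(receptive_field(2, 2))  # 3
```

Stacking layers with d = 1, 2, 4, … multiplies the total receptive field, which is how the TCN reaches far back into the load history without deep plain convolutions.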
Deeper network layers extract and identify finer-grained features of the input information, but simply deepening the network does not bring the expected improvement; instead, overfitting, vanishing gradients, and similar problems can arise. The TCN's residual network effectively mitigates the training errors caused by deep network layers and attains a larger receptive field without increasing the computation. Defining the input of a residual block as x and F(x) as the residual branch, the final output o of the residual network is:
o=σ(x+F(x))
where σ (·) is the activation function.
In one embodiment, note that load data and weather data are daily-resolution, while economic data may be monthly or quarterly; when the data are combined, the economic values are therefore heavily repeated within each day. Feeding all of these feature variables together into one convolution layer for training convolves them poorly, makes feature extraction difficult, and degrades model fit. Based on these characteristics of the dataset, the invention adopts multi-task learning, proposes an MTL-TCN deep learning network, and constructs a main task and an auxiliary task to train the prediction model.
And taking the load influence data and the historical daily load data as input data of the auxiliary task, and obtaining auxiliary prediction quantity as an output result of the auxiliary task through the feature extraction block, the auxiliary encoding block and the auxiliary decoding block.
The historical daily load data and the auxiliary prediction are taken as input data of the main task; the historical daily load data pass through the first encoding block and the first decoding block to obtain the intermediate prediction; the intermediate prediction and the auxiliary prediction are fused through a Concatenate layer, and the fused result passes through the second decoding block to obtain the load prediction result as the output of the main task.
Loss values of the main and auxiliary tasks are computed from the loss function: the loss Loss_2 between the auxiliary prediction and the actual load value, and the loss Loss_1 between the load prediction result and the actual load value. The two are combined by weighting to obtain the final loss Loss, and the prediction model parameters are updated by gradient descent on Loss, improving the model's generalization. Specifically, the loss weight of the main task is set to 1 and that of the auxiliary task to 0.3, as shown below:
Loss = 1·Loss_1 + 0.3·Loss_2
where Loss_1 and Loss_2 denote the losses of the main and auxiliary tasks, and y_i and ŷ_i denote the actual and predicted load values, respectively.
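The weighted multi-task loss is straightforward to compute; mean-squared error is assumed here since the patent does not name the per-task loss function, and all y values are invented:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean-squared error, an assumed choice of per-task loss."""
    return float(np.mean((y_true - y_pred) ** 2))

# Illustrative actual/predicted values for the two tasks.
y_main_true, y_main_pred = np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2])
y_aux_true,  y_aux_pred  = np.array([0.5, 0.7]),      np.array([0.4, 0.9])

loss1 = mse(y_main_true, y_main_pred)   # main-task loss, weight 1
loss2 = mse(y_aux_true,  y_aux_pred)    # auxiliary-task loss, weight 0.3
total = 1.0 * loss1 + 0.3 * loss2       # Loss = 1·Loss_1 + 0.3·Loss_2
```

The gradient of `total` with respect to the shared parameters is what the method's gradient-descent update would follow.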
In one embodiment, an unsupervised outlier detection method based on the idea of support vector data description (Support Vector Data Description, SVDD) is selected: the data are enclosed by a hypersphere whose volume is minimized, thereby minimizing the influence of outlying data. After applying Lagrangian duality, a new data point whose distance to the center is less than or equal to the radius is not an outlier; a point farther than the radius, lying outside the hypersphere, is treated as an outlier. Loads on holidays, at peak hours, and in similar situations differ from usual but should not be treated as outliers.
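The hypersphere test can be illustrated with a heavily simplified stand-in: a real SVDD solves a constrained optimization (via its Lagrangian dual) for the center and radius, whereas this sketch just uses the mean as the center and a distance quantile as the radius, both assumptions made only to show the decision rule:

```python
import numpy as np

def hypersphere_outliers(X, quantile=0.95):
    """Simplified hypersphere test in the spirit of SVDD: points whose distance
    to the center exceeds the radius are flagged as outliers. Center = mean and
    radius = distance quantile are simplifying assumptions, not the SVDD dual."""
    center = X.mean(axis=0)
    dist = np.linalg.norm(X - center, axis=1)
    radius = np.quantile(dist, quantile)
    return dist > radius

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))   # synthetic 2-D "load feature" points
X[0] = [8.0, 8.0]               # inject one obvious outlier
flags = hypersphere_outliers(X)
print(flags[0])  # True
```

In the patent's pipeline, points flagged this way would go through outlier handling during preprocessing, with holiday and peak-hour loads exempted.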
In one embodiment, the data are normalized to eliminate the influence of differing dimensions across data features, accelerate model training, and improve prediction accuracy. 0-1 (min-max) normalization is adopted so that every input feature lies on a similar scale, and the Adam algorithm is used to search for the global optimum when the prediction technique is applied:
X_norm = (X − X_min) / (X_max − X_min)
where X is a sample value, X_min and X_max are the minimum and maximum of the sample, and X_norm is the normalized value.
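The 0-1 normalization above maps every feature into [0, 1]; the load values below are invented for illustration:

```python
import numpy as np

def min_max_normalize(x):
    """X_norm = (X - X_min) / (X_max - X_min), mapping samples into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

loads = np.array([820.0, 860.0, 840.0, 900.0])
norm = min_max_normalize(loads)  # minimum maps to 0, maximum to 1
```

To invert the transform at prediction time, X_min and X_max must be stored from the training data.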
It should be noted that, the storage medium and the electronic device provided above are further used for implementing the method steps corresponding to the embodiments in the multi-factor fusion medium-long term power load multi-task learning prediction method shown in fig. 1, and are not repeated herein.
It should be noted that, in various embodiments of the present invention, each functional unit/network may be integrated in one processing unit/network, or each unit/network may exist alone physically, or two or more units/networks may be integrated in one unit/network. The integrated units/networks described above may be implemented either in hardware or in software functional units/networks.
From the description of the embodiments above, it will be apparent to those skilled in the art that the embodiments described herein may be implemented in hardware, software, firmware, middleware, code, or any suitable combination thereof. For a hardware implementation, the processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the flow of an embodiment may be accomplished by a computer program to instruct the associated hardware. When implemented, the above-described programs may be stored in or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. The computer readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (9)

1. The multi-factor fusion medium-and-long-term power load multi-task learning prediction method is characterized by comprising the following steps of:
acquiring historical load data and load influence data as input data of a prediction model; the prediction model comprises a main network and an auxiliary network; the main network comprises a first encoding block, a first decoding block and a second decoding block; the auxiliary network comprises a feature extraction block, an auxiliary encoding block and an auxiliary decoding block;
inputting historical load data to a main network, and performing convolution operation and maximum pooling operation on the historical load data by a first coding block to obtain a first depth characteristic; the first decoding block performs time convolution operation on the first depth feature to obtain a first decoding result, and performs nonlinear transformation on the first decoding result to obtain an intermediate predicted value;
inputting historical load data and load influence data to an auxiliary network, and performing graph convolution on the historical load data and the load influence data by a feature extraction block to obtain spatial features; the auxiliary coding block carries out convolution operation and maximum pooling operation on the space characteristics to obtain second depth characteristics; the auxiliary decoding block performs time convolution operation on the second depth feature to obtain a second decoding result, and performs nonlinear transformation on the second decoding result to obtain auxiliary predicted quantity;
and fusing the intermediate predicted value and the auxiliary predicted value, inputting the fused result into a second decoding block, performing time convolution operation on the fused result by the second decoding block to obtain a third decoding result, and performing nonlinear transformation on the third decoding result to obtain a load predicted result.
2. The multi-factor fusion medium-long term power load multi-task learning prediction method of claim 1, wherein the feature extraction block comprises two graph convolution layers expressed as:
f(X_t, A) = σ(A·ReLU(A·X_t·W^(0))·W^(1))
where ReLU(·) and σ(·) are the activation functions of the first and second graph convolution layers, respectively; W^(0) and W^(1) are the trainable weight matrices of the first and second graph convolution layers; X_t is the graph node feature matrix; A is the adjacency matrix.
3. The multi-factor fusion medium-long term power load multi-task learning prediction method according to claim 1, wherein the first coding block comprises a one-dimensional convolution layer and a maximum pooling layer which are connected in sequence; the first decoding block comprises a time convolution network and a dense connection layer which are connected in sequence.
4. The multi-factor fusion medium-long term power load multi-task learning prediction method according to claim 1, wherein the auxiliary coding block comprises two one-dimensional convolution layers and a max pooling layer connected in sequence; the auxiliary decoding block comprises two temporal convolution networks, a Dropout layer, and a dense connection layer connected in sequence.
5. The multi-factor fusion medium-long term power load multi-task learning prediction method according to claim 1, wherein the intermediate prediction and the auxiliary prediction are fused through a Concatenate layer.
6. The multi-factor fusion medium-long term power load multi-task learning prediction method according to claim 1, wherein the second decoding block comprises two temporal convolution networks, a Dropout layer, and a dense connection layer connected in sequence.
7. The multi-factor fusion medium-long term power load multi-task learning prediction method according to claim 1, wherein a main task and an auxiliary task are set to train the prediction model; the auxiliary prediction is taken as the output of the auxiliary task, and the load prediction result as the output of the main task;
the losses of the primary and secondary tasks are weighted and summed as shown in the following equation:
Loss = 1·Loss_1 + 0.3·Loss_2
where Loss_1 and Loss_2 denote the losses of the main and auxiliary tasks, and y_i and ŷ_i denote the actual and predicted load values, respectively.
8. An electronic device, comprising:
a memory for storing executable instructions;
a processor for executing executable instructions stored in said memory, implementing the steps of any one of claims 1 to 7.
9. A storage medium storing one or more programs executable by one or more processors to implement the steps of any of claims 1-7.
CN202311659104.9A 2023-12-05 2023-12-05 Multi-factor fusion medium-and-long-term power load multi-task learning prediction method Pending CN117709522A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311659104.9A CN117709522A (en) 2023-12-05 2023-12-05 Multi-factor fusion medium-and-long-term power load multi-task learning prediction method


Publications (1)

Publication Number Publication Date
CN117709522A true CN117709522A (en) 2024-03-15

Family

ID=90156175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311659104.9A Pending CN117709522A (en) 2023-12-05 2023-12-05 Multi-factor fusion medium-and-long-term power load multi-task learning prediction method

Country Status (1)

Country Link
CN (1) CN117709522A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination