CN115080795A - Multi-charging-station cooperative load prediction method and device - Google Patents


Info

Publication number
CN115080795A
Authority
CN
China
Prior art keywords
space
time
graph
charging
load
Prior art date
Legal status
Pending
Application number
CN202210687630.5A
Other languages
Chinese (zh)
Inventor
周翊民
罗清松
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202210687630.5A
Publication of CN115080795A

Classifications

    • G06F16/9024: Information retrieval; indexing and data structures; graphs, linked lists
    • G06F17/16: Complex mathematical operations; matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06Q10/04: Administration; management; forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q50/06: ICT specially adapted for specific business sectors; energy or water supply


Abstract

The invention relates to the field of power load prediction, in particular to a multi-charging-station cooperative load prediction method and device. The method and device first mine the implicit correlations among the load spatio-temporal information of multiple charging stations, generate a multi-charging-station graph structure from that information, and convert discrete load sequence data into graph data, so that cooperative load prediction for multiple charging stations can be performed; this overcomes the limitation that traditional load prediction models can only predict the load change of a single charging station. Secondly, the load spatio-temporal information of the multiple charging stations and the spatio-temporal features in the multi-charging-station graph structure are extracted, and the extracted spatio-temporal features are used for iterative graph structure learning on the multi-charging-station graph structure, so that more accurate and comprehensive spatio-temporal features are obtained. Finally, a joint loss function is constructed, and the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process are trained jointly, completing end-to-end load prediction and improving model training and prediction efficiency.

Description

Multi-charging-station cooperative load prediction method and device
Technical Field
The invention relates to the field of power load prediction, in particular to a multi-charging-station cooperative load prediction method and device.
Background
In recent years, with the opening of the electricity market and the development of the energy internet, a large number of distributed energy devices participate in electricity market operation. The electric vehicle, as a special kind of load, is randomly uncertain in time, space, and behavior; the uncoordinated grid connection of large numbers of electric vehicles sharply increases the grid load, enlarges the peak-valley difference, and may even paralyze the grid, which poses new challenges to its safe operation. Charging stations are the main places where electric vehicles are connected to the grid in a concentrated manner, and predicting the load changes of charging stations can effectively avoid grid risks.
Charging station load prediction uses historical data to predict future load changes so as to quantitatively analyze the influence of electric vehicle grid connection on the power grid. It is essentially a time-series prediction problem, i.e., using load data over a past period to predict future load changes. The predicted load of a single charging station depends only on its own historical load changes, while multiple charging stations distributed in different geographic areas of a city have different spatial characteristics. Cooperative load prediction for multiple charging stations can therefore be regarded as a spatio-temporal prediction problem: the predicted load of each charging station is related not only to its own history but also to the load histories of the other charging stations. Common approaches to charging station load prediction include time-series modeling, regression analysis, fuzzy prediction, and deep learning methods:
(1) Time-series modeling: a mathematical model describing how the power load changes over time is established from historical load data, a load prediction expression is built on the basis of the model, and the future load is predicted.
(2) Regression analysis: according to the change law of the historical data and the factors influencing the load change, the correlation between independent and dependent variables and the corresponding regression equation are found, the model parameters are determined, and the load value at a future moment is derived from them.
(3) Fuzzy prediction: a load prediction technique based on fuzzy mathematics, whose concepts can describe fuzzy phenomena in the power system, such as key factors in load prediction: the evaluation of weather conditions, the division of load date types, and so on. Applying fuzzy methods to load prediction can better handle the uncertainty of load changes. At present, fuzzy theory is mainly applied to load prediction through the fuzzy clustering method, the fuzzy similarity priority method, and the fuzzy maximum closeness method.
(4) Deep learning: the load over a past period is selected as training samples, a suitable network structure is constructed, the network is trained with a training algorithm, and once it meets the accuracy requirement the neural network is used as the load prediction model. According to the network structure, these methods can be classified into RNN, LSTM, GRU, MLP, etc.
To date, the main shortcomings of the prior art in the field of charging station load prediction are:
(1) Traditional mathematical-statistics methods such as time-series modeling, regression analysis, and fuzzy prediction rely on experience for modeling and parameter determination, must account for many different influencing factors, and tend to be complex and inaccurate to model. They also require large amounts of heterogeneous data for model validation and generalize poorly across different scenarios.
(2) Compared with traditional mathematical-statistics methods, deep-learning-based methods can automatically mine the temporal features in the load change sequence, with simple modeling and good results. However, existing deep learning models only consider the load prediction of a single charging station and cannot perform collaborative prediction for multiple charging stations, so a separate network model must be trained for each charging station, which imposes a heavy computational burden and scales poorly. Moreover, implicit associations exist among multiple charging stations; existing models only consider the temporal features of a single charging station's load change and ignore the spatial characteristics of the load distribution across multiple charging stations, so the extracted load change features are not accurate and comprehensive enough, which degrades prediction accuracy. Therefore, the spatio-temporal feature mining of the load of multiple charging stations in a city needs further study.
Disclosure of Invention
The embodiments of the invention provide a multi-charging-station cooperative load prediction method and device, which at least solve the technical problem that load change feature extraction in the prior art is not accurate or comprehensive.
According to an embodiment of the present invention, a method for predicting a cooperative load of multiple charging stations is provided, including the following steps:
mining implicit associations among the load spatio-temporal information of multiple charging stations and generating a multi-charging-station graph structure from the load spatio-temporal information of the multiple charging stations;
extracting load space-time information of multiple charging stations and space-time characteristics in a multiple charging station graph structure, and performing iterative graph structure learning on the multiple charging station graph structure by using the extracted space-time characteristics;
and constructing a joint loss function, and performing joint training on the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process.
Further, the method further comprises:
and, after the joint training, performing multi-charging-station collaborative load prediction on the load spatio-temporal information of the multiple charging stations by using the constructed multi-charging-station graph structure and the spatio-temporal feature extraction process.
Further, before mining implicit associations among the load spatio-temporal information of multiple charging stations and generating a multi-charging-station graph structure from the load spatio-temporal information of the multiple charging stations, the method further comprises:
preprocessing the load spatio-temporal information of the multiple charging stations, aggregating the data within each preset time period, the aggregated data representing the total charging amount of the charging station in that period;
dividing the data into three groups by adjacent time periods, day intervals, and week intervals, filling missing data by linear interpolation, and then normalizing the data with the MinMax method.
Further, the method specifically comprises:
the three groups of data are respectively input into a graph structure learning module; the graph structure learning module computes the similarity between charging stations from each station's historical load change data through a similarity metric function, generates a multi-charging-station graph structure with the charging stations as nodes and the similarities as edges, and inputs the graph structure information into three groups of spatio-temporal feature extraction modules;
the three groups of data and the learned graph structure information are input into the three groups of spatio-temporal feature extraction modules, which extract spatio-temporal features using a graph convolutional neural network and a dilated convolution network respectively; the learned features are then fed back into the graph structure learning module for iterative graph structure learning, and the three groups of spatio-temporal features are fused and input into a fully connected layer to predict the final result;
a loss function is designed for the graph structure learning module to control the sparsity and connectivity of the learned graph structure, and a loss function is designed for the spatio-temporal feature extraction module to reduce the difference between the prediction result and the label data; the two loss functions are weighted and summed to construct a joint loss function, realizing the joint training of the graph structure learning module and the spatio-temporal feature extraction module.
Further, at the data input layer, three groups of data of the multiple charging stations, taken over adjacent time periods, at day intervals, and at week intervals, are input simultaneously, namely X_P, X_D, X_W ∈ R^{M×N×D}, where M is the sequence length, N is the number of charging station nodes, and D is the input dimension;
in the graph structure learning module, an implicit graph structure among the multi-charging-station nodes is generated from the data input, and different data inputs X_P, X_D, X_W generate different graph structures G_P, G_D, G_W;
the generated graph structures are iteratively updated during spatio-temporal feature extraction; the spatio-temporal feature extraction module is divided into three sub-modules ST-P, ST-D, ST-W of identical structure, whose inputs are X_P, X_D, X_W respectively and which extract the temporal proximity feature, the periodicity feature, and the trend feature of the load change respectively;
finally, the three different spatio-temporal features are fused, and a fully connected layer outputs the prediction result.
Further, in the graph structure learning module, in order to learn the hidden associations between charging station nodes, a graph similarity metric learning method is adopted, and the cosine similarity is designed as the metric function:
s_ij = cos(w ⊙ x_i, w ⊙ x_j)
where ⊙ denotes the Hadamard product, w is a learnable weight vector, and x_i, x_j are input data; the cosine similarity is extended to a multi-head version with m weight vectors, the independent similarity matrices are computed separately, and their average is taken as the final similarity:
s_ij^p = cos(w_p ⊙ x_i, w_p ⊙ x_j)
s_ij = (1/m) Σ_{p=1}^{m} s_ij^p
where s_ij^p computes the similarity of the input vectors x_i and x_j under the p-th weight vector, and each head can be regarded as capturing part of the vector semantic features;
given the symmetric weighted adjacency matrix A of the undirected graph, the Dirichlet energy is used to measure the smoothness of the graph signal X ∈ R^{N×D}:
Ω(A, X) = (1/(2N²)) Σ_{i,j} A_ij ||x_i − x_j||² = (1/N²) tr(X^T L X)
where tr(·) denotes the trace of a matrix, L = D − A is the graph Laplacian matrix, and D with D_ii = Σ_j A_ij is the degree matrix; minimizing Ω(A, X) makes neighboring nodes have similar features; a sparsity constraint is added to the adjacency matrix:
f(A) = −(β/N) 1^T log(A1) + (γ/N²) ||A||_F²
where ||·||_F is the Frobenius norm and β, γ are non-negative hyperparameters; the first loss term in the above equation penalizes the formation of disconnected graphs, and the second penalizes the node degrees to control the sparsity of the graph; the smoothness loss and the sparsity constraint loss are added to obtain the overall graph regularization loss:
L_g = αΩ(A, X) + f(A)
where α is a non-negative hyperparameter; the overall regularization loss controls the smoothness, connectivity, and sparsity of the graph.
Further, in the spatio-temporal feature extraction module, each ST module comprises two ST sub-blocks; each ST sub-block comprises two gated dilated convolution layers and one gated graph convolution layer, where the gated dilated convolution network extracts the temporal features of the load sequence and the gated graph convolution extracts the spatial features of the multiple charging stations; the gated graph convolution layer is a bridge connecting the upper and lower dilated convolution layers, so that after dilated convolution the spatial state can propagate rapidly through the graph convolution layer; G is the learned graph structure information, used in each ST sub-block as the graph convolution prior, and F is the extracted spatio-temporal feature, which is fed back to the IDGL graph structure learning module for iterative learning of the graph structure information;
the dilated convolution skips a fixed distance at each step; given a one-dimensional sequence input X ∈ R^T and a filter f ∈ R^K, the dilated convolution at time t is expressed as:
(f *_d X)(t) = Σ_{k=0}^{K−1} f(k) · X(t − d·k)
where d is the dilation factor, which determines the skip distance of each convolution;
gated dilated convolution is adopted in the ST sub-block to extract the temporal dynamics of the load change:
H' = Dil_Conv_f(H^l) = f * H^l
where H^l ∈ R^{N×M×C_i} is the input of layer l, Dil_Conv is the dilated convolution operation, f ∈ R^{1×K×C_i×C_o} is the convolution kernel, H' ∈ R^{N×M×C_o} is the output, N is the number of charging station nodes, M is the length of the load change sequence, K is the convolution kernel size, and C_i, C_o are the numbers of input and output channels respectively; H' is split evenly, and gated linear units (GLU) are used to increase the non-linearity:
(H'_1, H'_2) = split(H')
H^{l+1} = tanh(H'_1) ⊙ sigmoid(H'_2)
where split denotes the splitting operation, H'_1, H'_2 are the gated inputs, H^{l+1} is the output, and tanh and sigmoid are activation functions; at the gated graph convolution layer, a first-order approximation GCN is embedded into the temporal gating unit:
θ *_g H^l = Â H^l θ
(H'_1, H'_2) = split(θ *_g H^l)
H^{l+1} = tanh(H'_1) ⊙ sigmoid(H'_2)
where *_g denotes the graph convolution operation, θ is a learnable parameter, and Â is prior information derived from the graph structure information G:
Ã = A + I_N
Â = D̃^{−1/2} Ã D̃^{−1/2}
where A ∈ R^{N×N} is the adjacency matrix of graph G and D̃ ∈ R^{N×N} is its degree matrix; in the gated attention dilated convolution, a self-attention mechanism is introduced:
Q = H^l w_Q,  K = H^l w_K,  V = H^l w_V
Att(Q, K, V) = softmax(QK^T / √d_k) V
H^{l+1} = Dil_Conv_f(Att(Q, K, V)) = f * Att(Q, K, V)
where Q, K, and V are the query, key, and value matrices respectively, w_Q, w_K, w_V are learnable parameters, Att is the attention value calculation function, and H^{l+1} is the output of layer l.
Further, the spatio-temporal features F_P, F_D, F_W extracted by the ST-P, ST-D, ST-W modules respectively are concatenated into F = F_P ⊕ F_D ⊕ F_W, which simultaneously contains the temporal proximity feature, the periodicity feature, and the trend feature of the load change, and F is input into a fully connected layer to compute the final prediction result:
Ŷ = W_p F + b_p
where W_p and b_p are learnable parameters; the mean absolute error (MAE) is used as the prediction loss function:
L_p = (1/N) Σ_{i=1}^{N} |Ŷ_i − Y_i|
a joint loss function of graph structure learning and prediction is defined:
L = L_p + λ L_g
where λ is a parameter used to balance the influence of the graph structure learning module and the spatio-temporal feature extraction module.
According to an embodiment of the present invention, there is provided a multi-charging-station cooperative load prediction apparatus including:
the graph structure learning module is used for mining implicit associations among the load spatio-temporal information of the multiple charging stations and generating a multi-charging-station graph structure from the load spatio-temporal information of the multiple charging stations;
the time-space feature extraction module is used for extracting load time-space information of the multiple charging stations and time-space features in the multiple charging station graph structure and performing iterative graph structure learning on the multiple charging station graph structure by using the extracted time-space features;
and the joint loss function construction module is used for constructing a joint loss function and performing joint training on the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process.
Further, the apparatus further comprises:
and the multi-charging-station collaborative load prediction module is used for performing, after the joint training, multi-charging-station collaborative load prediction on the load spatio-temporal information of the multiple charging stations by using the constructed multi-charging-station graph structure and the spatio-temporal feature extraction process.
A storage medium storing a program file that can implement any one of the above-described multi-charging-station cooperative load prediction methods.
A processor is used for running a program, wherein the program executes the multi-charging-station cooperative load prediction method in any one of the above manners.
According to the multi-charging-station cooperative load prediction method and device, the implicit correlations among the load spatio-temporal information of multiple charging stations are mined, a multi-charging-station graph structure is generated from the load spatio-temporal information, discrete load sequence data are converted into graph data, and cooperative load prediction of the multiple charging stations is carried out, overcoming the limitation that a traditional load prediction model can only predict the load change of a single charging station. Secondly, the load spatio-temporal information of the multiple charging stations and the spatio-temporal features in the multi-charging-station graph structure are extracted, and iterative graph structure learning is performed on the multi-charging-station graph structure using the extracted spatio-temporal features, so that more accurate and comprehensive spatio-temporal features are extracted. Finally, a joint loss function is constructed, and the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process are trained jointly, completing end-to-end load prediction and improving model training and prediction efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a multi-charging station cooperative load prediction method according to the present invention;
FIG. 2 is a model diagram illustrating a multi-charging-station cooperative load prediction model according to the present invention;
FIG. 3 is a diagram of an iterative graph structure learning framework in accordance with the present invention;
FIG. 4 is a block diagram of spatiotemporal feature extraction in the present invention;
FIG. 5 is a diagram of the dilated convolution network of the present invention;
fig. 6 is a block diagram of the multi-charging-station cooperative load prediction apparatus according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, a method for predicting the cooperative load of multiple charging stations is provided; referring to Fig. 1, the method includes the following steps:
S101, mining implicit associations among the load spatio-temporal information of multiple charging stations and generating a multi-charging-station graph structure from the load spatio-temporal information of the multiple charging stations;
S102, extracting the load spatio-temporal information of the multiple charging stations and the spatio-temporal features in the multi-charging-station graph structure, and performing iterative graph structure learning on the multi-charging-station graph structure by using the extracted spatio-temporal features;
S103, constructing a joint loss function, and performing joint training on the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process.
According to the multi-charging-station collaborative load prediction method of this embodiment, the implicit correlations among the load spatio-temporal information of multiple charging stations are mined, a multi-charging-station graph structure is generated from the load spatio-temporal information, discrete load sequence data are converted into graph data, and collaborative load prediction of the multiple charging stations is carried out, overcoming the limitation that a traditional load prediction model can only predict the load change of a single charging station. Secondly, the load spatio-temporal information of the multiple charging stations and the spatio-temporal features in the multi-charging-station graph structure are extracted, and iterative graph structure learning is performed on the multi-charging-station graph structure using the extracted spatio-temporal features, so that more accurate and comprehensive spatio-temporal features are extracted. Finally, a joint loss function is constructed, and the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process are trained jointly, completing end-to-end load prediction and improving model training and prediction efficiency.
Wherein, the method further comprises:
and, after the joint training, performing multi-charging-station collaborative load prediction on the load spatio-temporal information of the multiple charging stations by using the constructed multi-charging-station graph structure and the spatio-temporal feature extraction process.
Before mining implicit associations among the load spatio-temporal information of multiple charging stations and generating a multi-charging-station graph structure from the load spatio-temporal information of the multiple charging stations, the method further comprises the following steps:
preprocessing the load spatio-temporal information of the multiple charging stations, aggregating the data within each preset time period, the aggregated data representing the total charging amount of the charging station in that period;
dividing the data into three groups by adjacent time periods, day intervals, and week intervals, filling missing data by linear interpolation, and then normalizing the data with the MinMax method.
The method specifically comprises the following steps:
the three groups of data are respectively input into a graph structure learning module 201; the graph structure learning module 201 computes the similarity between charging stations from each station's historical load change data through a similarity metric function, generates a multi-charging-station graph structure with the charging stations as nodes and the similarities as edges, and inputs the graph structure information into three groups of spatio-temporal feature extraction modules 202;
the three groups of data and the learned graph structure information are input into the three groups of spatio-temporal feature extraction modules 202, which extract spatio-temporal features using a graph convolutional neural network and a dilated convolution network respectively; the learned features are then fed back into the graph structure learning module 201 for iterative graph structure learning, and the three groups of spatio-temporal features are fused and input into a fully connected layer to predict the final result;
a loss function is designed for the graph structure learning module 201 to control the sparsity and connectivity of the learned graph structure, and a loss function is designed for the spatio-temporal feature extraction module 202 to reduce the difference between the prediction result and the label data; the two loss functions are weighted and summed to construct a joint loss function, realizing the joint training of the graph structure learning module 201 and the spatio-temporal feature extraction module 202.
At the data input layer, three groups of data of the multiple charging stations, taken over adjacent time periods, at day intervals, and at week intervals, are input simultaneously, namely X_P, X_D, X_W ∈ R^{M×N×D}, where M is the sequence length, N is the number of charging station nodes, and D is the input dimension;
in the graph structure learning module 201, an implicit graph structure among the multi-charging-station nodes is generated from the data input, and different data inputs X_P, X_D, X_W generate different graph structures G_P, G_D, G_W;
the generated graph structures are iteratively updated during spatio-temporal feature extraction; the spatio-temporal feature extraction module 202 is divided into three sub-modules ST-P, ST-D, ST-W of identical structure, whose inputs are X_P, X_D, X_W respectively and which extract the temporal proximity feature, the periodicity feature, and the trend feature of the load change respectively;
finally, the three different spatio-temporal features are fused, and a fully connected layer outputs the prediction result.
In the graph structure learning module 201, in order to learn the hidden associations between charging station nodes, a graph similarity metric learning method is adopted, and the cosine similarity is designed as the metric function:
s_ij = cos(w ⊙ x_i, w ⊙ x_j)
where ⊙ denotes the Hadamard product, w is a learnable weight vector, and x_i, x_j are input data; the cosine similarity is extended to a multi-head version with m weight vectors, the independent similarity matrices are computed separately, and their average is taken as the final similarity:
s_ij^p = cos(w_p ⊙ x_i, w_p ⊙ x_j)
s_ij = (1/m) Σ_{p=1}^{m} s_ij^p
where s_ij^p computes the similarity of the input vectors x_i and x_j under the p-th weight vector, and each head can be regarded as capturing part of the vector semantic features;
given the symmetric weighted adjacency matrix A of the undirected graph, the Dirichlet energy is used to measure the smoothness of the graph signal X ∈ R^{N×D}:
Ω(A, X) = (1/(2N²)) Σ_{i,j} A_ij ||x_i − x_j||² = (1/N²) tr(X^T L X)
where tr(·) denotes the trace of a matrix, L = D − A is the graph Laplacian matrix, and D with D_ii = Σ_j A_ij is the degree matrix; minimizing Ω(A, X) makes neighboring nodes have similar features; a sparsity constraint is added to the adjacency matrix:
f(A) = −(β/N) 1^T log(A1) + (γ/N²) ||A||_F²
where ||·||_F is the Frobenius norm and β, γ are non-negative hyperparameters; the first loss term in the above equation penalizes the formation of disconnected graphs, and the second penalizes the node degrees to control the sparsity of the graph; the smoothness loss and the sparsity constraint loss are added to obtain the overall graph regularization loss:
L_g = αΩ(A, X) + f(A)
where α is a non-negative hyperparameter; the overall regularization loss controls the smoothness, connectivity, and sparsity of the graph.
In the spatio-temporal feature extraction module 202, each ST module comprises two ST sub-blocks; each ST sub-block comprises two gated dilated convolution layers and one gated graph convolution layer, where the gated dilated convolution network extracts the temporal features of the load sequence and the gated graph convolution extracts the spatial features of the multiple charging stations; the gated graph convolution layer is a bridge connecting the upper and lower dilated convolution layers, so that after dilated convolution the spatial state can propagate rapidly through the graph convolution layer; G is the learned graph structure information, used in each ST sub-block as the graph convolution prior, and F is the extracted spatio-temporal feature, which is fed back to the IDGL graph structure learning module 201 for iterative learning of the graph structure information;
the dilated convolution skips a fixed distance at each step; given a one-dimensional sequence input X ∈ R^T and a filter f ∈ R^K, the dilated convolution at time t is expressed as:
(f *_d X)(t) = Σ_{k=0}^{K−1} f(k) · X(t − d·k)
where d is the dilation factor, which determines the skip distance of each convolution;
gated dilated convolution is adopted in the ST sub-block to extract the temporal dynamics of the load change:
H' = Dil_Conv_f(H^l) = f * H^l
where H^l ∈ R^{N×M×C_i} is the input of layer l, Dil_Conv is the dilated convolution operation, f ∈ R^{1×K×C_i×C_o} is the convolution kernel, H' ∈ R^{N×M×C_o} is the output, N is the number of charging station nodes, M is the length of the load change sequence, K is the convolution kernel size, and C_i, C_o are the numbers of input and output channels respectively; H' is split evenly, and gated linear units (GLU) are used to increase the non-linearity:
(H'_1, H'_2) = split(H')
H^{l+1} = tanh(H'_1) ⊙ sigmoid(H'_2)
where split denotes the splitting operation, H'_1, H'_2 are the gated inputs, H^{l+1} is the output, and tanh and sigmoid are activation functions; at the gated graph convolution layer, a first-order approximation GCN is embedded into the temporal gating unit:
θ *_g H^l = Â H^l θ
(H'_1, H'_2) = split(θ *_g H^l)
H^{l+1} = tanh(H'_1) ⊙ sigmoid(H'_2)
where *_g denotes the graph convolution operation, θ is a learnable parameter, and Â is prior information derived from the graph structure information G:
Ã = A + I_N
Â = D̃^{−1/2} Ã D̃^{−1/2}
where A ∈ R^{N×N} is the adjacency matrix of graph G and D̃ ∈ R^{N×N} is its degree matrix; in the gated attention dilated convolution, a self-attention mechanism is introduced:
Q = H^l w_Q,  K = H^l w_K,  V = H^l w_V
Att(Q, K, V) = softmax(QK^T / √d_k) V
H^{l+1} = Dil_Conv_f(Att(Q, K, V)) = f * Att(Q, K, V)
where Q, K, and V are the query, key, and value matrices respectively, w_Q, w_K, w_V are learnable parameters, Att is the attention value calculation function, and H^{l+1} is the output of layer l.
The spatio-temporal features F_P, F_D, F_W extracted by the ST-P, ST-D, ST-W modules respectively are concatenated into F = F_P ⊕ F_D ⊕ F_W, which simultaneously contains the temporal proximity feature, the periodicity feature, and the trend feature of the load change, and F is input into a fully connected layer to compute the final prediction result:
Ŷ = W_p F + b_p
where W_p and b_p are learnable parameters; the mean absolute error (MAE) is used as the prediction loss function:
L_p = (1/N) Σ_{i=1}^{N} |Ŷ_i − Y_i|
a joint loss function of graph structure learning and prediction is defined:
L = L_p + λ L_g
where λ is a parameter used to balance the effects of the graph structure learning module 201 and the spatio-temporal feature extraction module 202.
The multi-charging-station cooperative load prediction method of the present invention is described in detail below with specific embodiments and with reference to fig. 2 to 5:
in order to solve the technical problems in the prior art, the invention designs a multi-charging-station cooperative load prediction method, which is used for mining implicit association among multiple charging stations, further extracting the time-space characteristics of the load change of the multiple charging stations by using load historical data and predicting the future load change. In the technical scheme of the invention, a graph structure learning module 201 for multi-charging-station cooperative load prediction based on a graph convolution neural network is provided, the graph structure learning module 201 performs iterative graph structure learning according to load space-time information of multiple charging stations, implicit association among the multiple charging stations is mined and a graph structure is generated, and discrete load sequence data is converted into graph data, so that the graph convolution neural network can be used for performing cooperative load prediction of the multiple charging stations, and the defect that a traditional load prediction model can only predict load change of a single charging station is overcome. The invention further designs a time-space feature extraction module 202 based on the graph convolution neural network and the time sequence convolution (expansion convolution) network, which learns the space features among multiple charging stations through the graph convolution neural network and learns the time features of load change by using the time sequence convolution network, thereby extracting more accurate and comprehensive time-space features. In order to realize the joint training of the graph structure learning module 201 and the time-space feature extraction module 202, a joint loss function is constructed, end-to-end load prediction is completed, and the model training and prediction efficiency is improved.
The processing flow of the invention is shown in Fig. 2:
First, data preprocessing is performed: data from weekends and holidays are removed to ensure the generality of the model; the data within each time period are aggregated, each aggregated record representing the total charging amount of the charging station in that period; the data are divided into three groups by adjacent time period, by day interval, and by week interval; missing data are filled by linear interpolation, and the data are then normalized with the MinMax method (a minimal preprocessing sketch is given after this overview).
Secondly, the graph structure learning module 201 computes the similarity between charging stations from each station's historical load change data through a similarity metric function, generates a multi-charging-station graph structure with the charging stations as nodes and the similarities as edges, and inputs the graph structure information into the three groups of spatio-temporal feature extraction modules 202.
Thirdly, spatio-temporal feature extraction: the three groups of data and the learned graph structure information are input into the three groups of spatio-temporal feature extraction modules 202, which extract spatio-temporal features using a graph convolutional neural network and a dilated convolution network respectively; the learned feature representations are then fed back into the graph structure learning module 201 for iterative graph structure learning. The three groups of spatio-temporal features are fused and input into a fully connected layer to obtain the final prediction result.
Fourthly, model joint training: for the graph structure learning module 201, a loss function is designed to control the sparsity and connectivity of the learned graph structure; for the spatio-temporal feature extraction module 202, a loss function is designed to reduce the difference between the prediction result and the label data; the two loss functions are weighted and summed to construct a joint loss function, completing the joint training of the graph structure learning module 201 and the spatio-temporal feature extraction module 202.
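To make the preprocessing step above concrete, the following is a minimal Python/pandas sketch; the column names ('time', 'station_id', 'energy_kwh'), the one-hour aggregation frequency, and the omission of an explicit holiday list are illustrative assumptions, not the patented implementation.

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, freq: str = "1H") -> pd.DataFrame:
    """Aggregate raw charging records into per-period station loads,
    fill missing values by linear interpolation, and MinMax-normalize.
    Assumed input columns: 'time' (datetime), 'station_id', 'energy_kwh'."""
    df = df[df["time"].dt.dayofweek < 5]           # drop weekends (holidays omitted here)
    load = (df.set_index("time")
              .groupby("station_id")["energy_kwh"]
              .resample(freq).sum()                # total charging amount per period
              .unstack(level=0))                   # rows: periods, columns: stations
    load = load.interpolate(method="linear", limit_direction="both")
    return (load - load.min()) / (load.max() - load.min())   # MinMax per station

def make_groups(load: pd.DataFrame, m: int, periods_per_day: int):
    """Split the normalized sequence into the three input groups:
    adjacent periods (P), day-interval samples (D), week-interval samples (W)."""
    x = load.to_numpy()
    x_p = x[-m:]                                   # most recent adjacent periods
    x_d = x[::-periods_per_day][:m][::-1]          # samples taken at day intervals
    x_w = x[::-periods_per_day * 7][:m][::-1]      # samples taken at week intervals
    return x_p, x_d, x_w
```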
The technical embodiments of the present invention will be described in detail below, and it should be noted that the specific technical embodiments herein do not limit the scope of the present invention.
The general framework of the present invention is shown in fig. 2, and mainly includes a graph structure learning module 201 and a null feature extraction module 202. On the data input layer, three groups of data of a plurality of charging stations at a time interval of a period of days and a week interval are simultaneously input, namely X P ,X D ,X W ∈R M×N×D M is the sequence length, N is the number of charging station nodes, and D is the input dimension. In the graph structure learning module 201, an implicit graph structure among the nodes of the multiple charging stations is generated according to data input, and different data inputs X P ,X D ,X W Generating different structures G P ,G D ,G W And iteratively updating the generated graph structure in the process of extracting the space-time characteristics. In the space-time feature extraction module 202, the space-time feature extraction module is divided into three submodules ST-P, ST-D, ST-W, the submodules have the same structure, but the input data are X respectively P ,X D ,X W And respectively extracting time proximity characteristic, periodic characteristic and trend characteristic of the load change. And finally, fusing three different space-time characteristics, and outputting a prediction result by using a full-connection layer.
1. Graph structure learning module 201
In the graph structure learning module 201, the present invention learns graph structures and graph convolution neural network (GCN) parameters simultaneously in an iterative manner. The key principle of the iterative graph structure learning framework is to use more accurate graph node feature representation to learn a better graph structure, and simultaneously, extract better node features based on the learned graph structure, as shown in fig. 3. Unlike methods that generate graphs based on raw node features, node features learned by GCN can provide useful information for graph structure learning. On the other hand, the newly learned graph structure information may provide better graph input for GCN parameter learning.
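The loop below is a schematic Python sketch of this interplay; the module interfaces (`graph_learner`, `st_module`) and the fixed number of inner iterations are assumptions used only to illustrate the idea of iterative graph structure learning, not the exact procedure of the invention.

```python
def iterative_graph_learning_step(x, graph_learner, st_module, n_iters: int = 3):
    """One schematic forward pass of iterative graph structure learning.
    x: load history of all stations, shape [N, M, D]."""
    node_repr = x                              # start from the raw node features
    adjacency = graph_learner(node_repr)       # initial graph from raw inputs
    for _ in range(n_iters):
        # better node features are extracted on the current graph ...
        node_repr, prediction = st_module(x, adjacency)
        # ... and a refined graph is learned from those features
        adjacency = graph_learner(node_repr)
    return adjacency, prediction
```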
In order to learn the hidden associations between charging station nodes, a graph similarity metric learning method is adopted, and the cosine similarity is designed as the metric function:
s_ij = cos(w ⊙ x_i, w ⊙ x_j)
where ⊙ denotes the Hadamard product, w is a learnable weight vector, and x_i, x_j are input data. To improve the expressive power, the cosine similarity is extended to a multi-head version with m weight vectors, the independent similarity matrices are computed separately, and their average is taken as the final similarity:
s_ij^p = cos(w_p ⊙ x_i, w_p ⊙ x_j)
s_ij = (1/m) Σ_{p=1}^{m} s_ij^p
where s_ij^p computes the similarity of the input vectors x_i and x_j under the p-th weight vector; each head can be regarded as capturing part of the vector semantic features.
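As an illustration of this metric-learning step, a minimal NumPy sketch follows; in the actual model the weight vectors would be trainable network parameters, and treating them as plain arrays here is a simplification.

```python
import numpy as np

def multihead_cosine_similarity(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: node features [N, D] (flattened load history per station);
    w: m weight vectors [m, D]. Returns the averaged similarity matrix [N, N]."""
    heads = []
    for w_p in w:                                        # one head per weight vector
        z = w_p * x                                      # Hadamard product
        z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)
        heads.append(z @ z.T)                            # pairwise cosine similarities
    return np.mean(heads, axis=0)                        # average over the m heads

# Example: 10 charging stations, 24-dimensional load history, 4 heads.
rng = np.random.default_rng(0)
A = multihead_cosine_similarity(rng.normal(size=(10, 24)), rng.normal(size=(4, 24)))
```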
In order to ensure the quality of the learned graph structure, the smoothness, connectivity, and sparsity of the graph are controlled by graph regularization. Smoothness means that the graph signal values change smoothly between adjacent nodes; given the symmetric weighted adjacency matrix A of the undirected graph, the Dirichlet energy measures the smoothness of the graph signal X ∈ R^{N×D}:
Ω(A, X) = (1/(2N²)) Σ_{i,j} A_ij ||x_i − x_j||² = (1/N²) tr(X^T L X)
where tr(·) denotes the trace of a matrix, L = D − A is the graph Laplacian matrix, and D with D_ii = Σ_j A_ij is the degree matrix. By minimizing Ω(A, X), neighboring nodes are made to have similar features, which ensures the smoothness of the graph signals. To control the sparsity of the graph, a sparsity constraint is added to the adjacency matrix:
f(A) = −(β/N) 1^T log(A1) + (γ/N²) ||A||_F²
where ||·||_F is the Frobenius norm and β, γ are non-negative hyperparameters. The first loss term penalizes the formation of disconnected graphs, and the second penalizes the node degrees to control the sparsity of the graph. The smoothness loss and the sparsity constraint loss are added to obtain the overall graph regularization loss:
L_g = αΩ(A, X) + f(A)
where α is a non-negative hyperparameter. The overall regularization loss controls the smoothness, connectivity, and sparsity of the graph, thereby ensuring the quality of the learned graph structure.
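The following NumPy sketch computes this regularization directly from the formulas above; the normalization by N and N² mirrors the equations, while the particular hyper-parameter values are arbitrary placeholders.

```python
import numpy as np

def graph_regularization_loss(A: np.ndarray, X: np.ndarray,
                              alpha: float = 0.2, beta: float = 0.1,
                              gamma: float = 0.1) -> float:
    """Smoothness + connectivity + sparsity regularization of a learned graph.
    A: symmetric weighted adjacency matrix [N, N]; X: node features [N, D]."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A                        # graph Laplacian D - A
    smoothness = np.trace(X.T @ L @ X) / (n * n)          # Dirichlet energy term
    connectivity = -(beta / n) * np.log(A.sum(axis=1) + 1e-8).sum()  # penalize disconnection
    sparsity = (gamma / (n * n)) * np.square(A).sum()     # squared Frobenius norm
    return alpha * smoothness + connectivity + sparsity
```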
2. Spatio-temporal feature extraction module 202
The invention designs a spatio-temporal feature extraction module 202 based on a graph convolutional neural network and a dilated convolution network, as shown in Fig. 4, which is the structure of the ST-P, ST-D, ST-W modules of Fig. 3. Each ST module comprises two ST sub-blocks; each ST sub-block adopts a sandwich-like structure comprising two gated dilated convolution layers and one gated graph convolution layer, where the gated dilated convolution network extracts the temporal features of the load sequence and the gated graph convolution extracts the spatial features of the multiple charging stations, realizing spatio-temporal feature extraction. The gated graph convolution layer is a bridge connecting the upper and lower dilated convolution layers, so that after dilated convolution the spatial state can propagate rapidly through the graph convolution layer. The sandwich structure effectively applies a bottleneck strategy: up- and down-sampling before and after the gated graph convolution realizes scale and feature compression, reducing the number of model parameters. G is the learned graph structure information, used as the graph convolution prior in each ST sub-block, and F is the extracted spatio-temporal feature, which is fed back to the IDGL graph structure learning module 201 for iterative learning of the graph structure information.
Although recurrent neural networks (RNN, GRU, LSTM, etc.) have achieved good results in time series analysis, they suffer from time-consuming iteration and slow response to dynamic changes. The dilated convolution network has the advantages of fast training, a simple structure, and no sequential dependency between computation steps, and it can enlarge the receptive field, as shown in Fig. 5. The dilated convolution skips a fixed distance at each step; given a one-dimensional sequence input X ∈ R^T and a filter f ∈ R^K, the dilated convolution at time t can be expressed as:
(f *_d X)(t) = Σ_{k=0}^{K−1} f(k) · X(t − d·k)
where d is the dilation factor, which determines the skip distance of each convolution. By stacking dilated convolution layers with increasing dilation factors, the receptive field of the model grows exponentially, which allows the dilated convolution to capture longer temporal dependencies with fewer layers and reduces computational complexity.
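A didactic NumPy sketch of a causal dilated convolution and of stacking layers with increasing dilation factors follows; the explicit loop illustrates the formula and is not the optimized network layer used in the model.

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, f: np.ndarray, d: int) -> np.ndarray:
    """Causal dilated convolution: output at time t sums f[k] * x[t - d*k]."""
    T, K = len(x), len(f)
    start = d * (K - 1)                  # first step with a full receptive field
    return np.array([sum(f[k] * x[t - d * k] for k in range(K))
                     for t in range(start, T)])

# Stacking layers with dilations 1, 2, 4 grows the receptive field exponentially.
x = np.sin(np.linspace(0, 6, 64))
h = dilated_conv1d(x, np.array([0.5, 0.3, 0.2]), d=1)
h = dilated_conv1d(h, np.array([0.5, 0.3, 0.2]), d=2)
h = dilated_conv1d(h, np.array([0.5, 0.3, 0.2]), d=4)
```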
The invention adopts gated dilated convolution in the ST sub-block to extract the temporal dynamics of the load change:
H' = Dil_Conv_f(H^l) = f * H^l
where H^l ∈ R^{N×M×C_i} is the input of layer l, Dil_Conv is the dilated convolution operation, f ∈ R^{1×K×C_i×C_o} is the convolution kernel, H' ∈ R^{N×M×C_o} is the output, N is the number of charging station nodes, M is the length of the load change sequence, K is the convolution kernel size, and C_i, C_o are the numbers of input and output channels respectively. H' is split evenly, and gated linear units (GLU) are used to increase the non-linearity:
(H'_1, H'_2) = split(H')
H^{l+1} = tanh(H'_1) ⊙ sigmoid(H'_2)
where split denotes the splitting operation, H'_1, H'_2 are the gated inputs, H^{l+1} is the output, and tanh and sigmoid are activation functions.
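A hedged PyTorch sketch of such a gated dilated convolution layer is given below; the tensor layout [batch, channels, nodes, time], the 1×K kernel, and the use of a single convolution producing both GLU branches are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class GatedDilatedConv(nn.Module):
    """Gated dilated convolution along the time axis with a GLU-style gate."""
    def __init__(self, c_in: int, c_out: int, kernel: int = 2, dilation: int = 1):
        super().__init__()
        # 1 x K kernel convolves over time only; 2*c_out channels feed the split.
        self.conv = nn.Conv2d(c_in, 2 * c_out, kernel_size=(1, kernel),
                              dilation=(1, dilation))

    def forward(self, h):                        # h: [batch, C_in, N, M]
        h = self.conv(h)                         # [batch, 2*C_out, N, M']
        h1, h2 = torch.chunk(h, 2, dim=1)        # even split for the gate
        return torch.tanh(h1) * torch.sigmoid(h2)

# Example: 8 stations, 12 time steps, 1 input channel -> 32 output channels.
y = GatedDilatedConv(1, 32, kernel=2, dilation=2)(torch.randn(4, 1, 8, 12))
```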
In the gated graph convolution layer, to extract spatial and local temporal features simultaneously, a first-order approximation GCN is embedded into the temporal gating unit:
θ *_g H^l = Â H^l θ
(H'_1, H'_2) = split(θ *_g H^l)
H^{l+1} = tanh(H'_1) ⊙ sigmoid(H'_2)
where *_g denotes the graph convolution operation, θ is a learnable parameter, and Â is prior information derived from the graph structure information G:
Ã = A + I_N
Â = D̃^{−1/2} Ã D̃^{−1/2}
where A ∈ R^{N×N} is the adjacency matrix of graph G and D̃ ∈ R^{N×N} is its degree matrix.
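An illustrative PyTorch sketch of a graph convolution embedded in a temporal gate, using the first-order GCN prior defined above, follows; the tensor layout [batch, time, nodes, channels] and the use of a linear layer for θ are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def gcn_prior(adj: torch.Tensor) -> torch.Tensor:
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, the first-order GCN prior."""
    a_tilde = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_tilde.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_tilde @ d_inv_sqrt

class GatedGraphConv(nn.Module):
    """Graph convolution wrapped in a tanh/sigmoid gate (sketch)."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.theta = nn.Linear(c_in, 2 * c_out)           # learnable parameter theta

    def forward(self, h, adj):                             # h: [batch, M, N, C_in]
        a_hat = gcn_prior(adj)
        h = torch.einsum("vn,bmnc->bmvc", a_hat, h)        # propagate over the graph
        h1, h2 = torch.chunk(self.theta(h), 2, dim=-1)
        return torch.tanh(h1) * torch.sigmoid(h2)

# Example: 8 stations, 12 time steps, 32 -> 32 channels.
out = GatedGraphConv(32, 32)(torch.randn(4, 12, 8, 32), torch.rand(8, 8))
```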
In the gated attention dilated convolution layer, global temporal features are further extracted; to capture temporal dynamics, i.e. the fact that different sequence inputs may have different temporal dependencies, a self-attention mechanism is introduced:
Q = H^l w_Q,  K = H^l w_K,  V = H^l w_V
Att(Q, K, V) = softmax(QK^T / √d_k) V
H^{l+1} = Dil_Conv_f(Att(Q, K, V)) = f * Att(Q, K, V)
where Q, K, and V are the query, key, and value matrices respectively, w_Q, w_K, w_V are learnable parameters, Att is the attention value calculation function, and H^{l+1} is the output of layer l.
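A minimal sketch of the self-attention step over the time axis is given below; the √d_k scaling shown here follows the standard scaled dot-product formulation and is an assumption about the exact form used.

```python
import torch
import torch.nn.functional as F

def temporal_self_attention(h, w_q, w_k, w_v):
    """h: [batch, N, M, C]; w_q, w_k, w_v: [C, C] learnable projections.
    Attention is computed over the M time steps of every station."""
    q, k, v = h @ w_q, h @ w_k, h @ w_v
    scores = q @ k.transpose(-2, -1) / (h.size(-1) ** 0.5)   # [batch, N, M, M]
    return F.softmax(scores, dim=-1) @ v                     # attention-weighted values

h = torch.randn(4, 8, 12, 32)
att = temporal_self_attention(h, *(torch.randn(32, 32) for _ in range(3)))
# `att` would then be fed into the next dilated convolution layer.
```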
3. Joint loss function
The spatio-temporal features F_P, F_D, F_W extracted by the ST-P, ST-D, ST-W modules respectively are concatenated into F = F_P ⊕ F_D ⊕ F_W, which simultaneously contains the temporal proximity feature, the periodicity feature, and the trend feature of the load change, and F is input into a fully connected layer to compute the final prediction result:
Ŷ = W_p F + b_p
where W_p and b_p are learnable parameters. The mean absolute error (MAE) is used as the prediction loss function:
L_p = (1/N) Σ_{i=1}^{N} |Ŷ_i − Y_i|
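A small PyTorch sketch of the fusion and prediction step follows; the feature dimensionality, the single-step prediction horizon, and the exact output shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionPredictor(nn.Module):
    """Concatenate the three spatio-temporal feature groups and map them to the
    predicted load of every station with one fully connected layer."""
    def __init__(self, feat_dim: int, horizon: int = 1):
        super().__init__()
        self.fc = nn.Linear(3 * feat_dim, horizon)        # W_p, b_p

    def forward(self, f_p, f_d, f_w):
        f = torch.cat([f_p, f_d, f_w], dim=-1)            # F = F_P (+) F_D (+) F_W
        return self.fc(f)                                 # Y_hat = W_p F + b_p

mae = nn.L1Loss()                                         # prediction loss L_p (MAE)
f_p = f_d = f_w = torch.randn(4, 8, 64)                   # [batch, stations, features]
y_hat = FusionPredictor(64)(f_p, f_d, f_w)                # [4, 8, 1]
loss_p = mae(y_hat, torch.randn(4, 8, 1))
```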
since the graph structure learning is performed iteratively in the process of spatio-temporal feature extraction, a joint loss function of graph structure learning and prediction is defined:
Figure BDA0003700253750000206
where λ is a parameter used to balance the effects of the graph structure learning module 201 and the spatio-temporal feature extraction module 202. By optimizing the loss function, the integral training of the model can be realized, and the learning of model parameters is realized while the optimal graph structure is learned.
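A schematic training step combining the two losses is sketched below; the model interface (returning the prediction, the learned adjacency, and the node features) and the value of λ are assumptions used only to show how the joint loss drives end-to-end training.

```python
import torch

def joint_training_step(model, graph_reg_loss, batch_x, batch_y,
                        optimizer, lam: float = 0.5) -> float:
    """One end-to-end step minimizing L = L_p + lambda * L_g."""
    optimizer.zero_grad()
    y_hat, adjacency, node_feats = model(batch_x)          # assumed model interface
    loss_p = torch.mean(torch.abs(y_hat - batch_y))        # MAE prediction loss L_p
    loss_g = graph_reg_loss(adjacency, node_feats)         # graph regularization L_g
    loss = loss_p + lam * loss_g                           # joint loss
    loss.backward()                                        # updates both modules jointly
    optimizer.step()
    return float(loss.detach())
```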
The key points and points to be protected of the invention mainly comprise the following points:
1. according to the multi-charging-station load prediction technical route, firstly, a graph structure learning is utilized to convert a multi-charging-station load prediction problem into a graph structure data processing problem, and then a graph convolution neural network and a time sequence network are utilized to extract and predict the time-space characteristics of the load.
2. The multi-charging-station collaborative load prediction model comprises an iterative graph structure learning module 201, three space-time feature extraction modules 202 and a feature fusion and prediction module.
3. The method of iteratively mining the implicit associations among multiple charging stations using the graph structure learning module 201.
4. The method of extracting the spatio-temporal features of the multi-charging-station load changes using the spatio-temporal feature extraction module 202 composed of a graph convolutional neural network and a dilated convolution network.
5. The joint loss function designed in the present invention for the graph structure learning and spatio-temporal feature extraction modules 201 and 202.
Compared with the existing charging station load prediction method, the method has the advantages that:
1. Compared with traditional mathematical-statistical methods, there is no need to consider many external influencing factors, the modeling process is simple, only a small number of model hyper-parameters need to be determined during training, and the prediction accuracy is higher.
2. Compared with the load prediction method based on deep learning:
i) the method mines the implicit associations among multiple charging stations with graph structure learning, so that discrete charging station sites are constructed into a graph structure; a graph convolutional neural network can then predict the load changes of multiple charging stations simultaneously, breaking the limitation that deep learning methods can only predict the load of a single charging station;
ii) besides the temporal characteristics of the load change of a single charging station, the spatial characteristics of the load changes across multiple charging stations are also considered, so the extracted spatio-temporal features are more comprehensive;
iii) when the model designed by the invention predicts the load of a new charging station, only graph node information needs to be added to the current model, whereas traditional methods have to train a new model, so the model of the invention is more scalable.
A large number of experiments are carried out on the multi-charging-station cooperative load prediction model, and the result proves that the multi-charging-station cooperative load prediction model is feasible and effective.
Example 2
According to an embodiment of the present invention, there is provided a multi-charging-station cooperative load prediction apparatus, referring to fig. 6, including:
the graph structure learning module 201 is used for mining implicit associations in the load space-time information of the multiple charging stations and generating a multiple charging station graph structure from the load space-time information of the multiple charging stations;
the space-time feature extraction module 202 is used for extracting load space-time information of multiple charging stations and space-time features in a multiple charging station graph structure, and performing iterative graph structure learning on the multiple charging station graph structure by using the extracted space-time features;
and the joint loss function construction module 203 is configured to construct a joint loss function and perform joint training on the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process.
According to the multi-charging-station cooperative load prediction device of the embodiment of the invention, the implicit associations among the load spatio-temporal information of multiple charging stations are first mined, the load spatio-temporal information of the multiple charging stations is generated into a multi-charging-station graph structure, and the discrete load sequence data are converted into graph data for cooperative load prediction of the multiple charging stations, overcoming the defect that traditional load prediction models can only predict the load change of a single charging station. Secondly, the load spatio-temporal information of the multiple charging stations and the spatio-temporal features in the multi-charging-station graph structure are extracted, and iterative graph structure learning is performed on the multi-charging-station graph structure with the extracted spatio-temporal features, so that more accurate and comprehensive spatio-temporal features are extracted. Finally, a joint loss function is constructed to jointly train the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process, completing end-to-end load prediction and improving model training and prediction efficiency.
The device further includes:
and the multi-charging-station collaborative load prediction module is used for constructing a multi-charging-station graph structure and performing multi-charging-station collaborative load prediction on the load spatio-temporal information of the multiple charging stations by using the spatio-temporal feature extraction process after the joint training.
The multi-charging-station cooperative load prediction apparatus of the present invention will be described in detail with reference to fig. 2 to 5 in the following embodiments:
in order to solve the technical problems in the prior art, the invention designs the multi-charging-station cooperative load prediction device, which is used for mining implicit association among the multi-charging-station, further extracting the time-space characteristics of the load change of the multi-charging-station by utilizing load historical data and predicting the future load change. In the technical scheme of the invention, a graph structure learning module 201 for multi-charging-station cooperative load prediction based on a graph convolution neural network is provided, the graph structure learning module 201 performs iterative graph structure learning according to load space-time information of multiple charging stations, implicit association among the multiple charging stations is mined and a graph structure is generated, and discrete load sequence data is converted into graph data, so that the graph convolution neural network can be used for performing cooperative load prediction of the multiple charging stations, and the defect that a traditional load prediction model can only predict load change of a single charging station is overcome. The invention further designs a time-space feature extraction module 202 based on the graph convolution neural network and the time sequence convolution (expansion convolution) network, which learns the space features among multiple charging stations through the graph convolution neural network and learns the time features of load change by using the time sequence convolution network, thereby extracting more accurate and comprehensive time-space features. In order to realize the joint training of the graph structure learning module 201 and the spatio-temporal feature extraction module 202, a joint loss function is constructed, end-to-end load prediction is completed, and the model training and prediction efficiency is improved.
The processing flow of the invention is shown in figure 2:
Firstly, data preprocessing: data from weekends and holidays are removed to ensure the generality of the model. The data in each time period are aggregated, and each aggregated record represents the total charging amount of the charging station in that period. The data are divided into three groups: by adjacent time period, by day interval, and by week interval. Missing data are filled by linear interpolation, and the data are then normalized with the MinMax method.
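As a non-limiting illustration, a preprocessing sketch in Python/pandas follows; the column names, the one-hour aggregation window and the simple shift-based day/week grouping are assumptions (holiday removal would additionally require a holiday calendar).

import pandas as pd

def preprocess(csv_path: str, freq: str = "1H"):
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])   # columns: station, timestamp, energy_kwh
    df = df[df["timestamp"].dt.dayofweek < 5]                # drop weekends (holidays analogous)
    load = (df.set_index("timestamp")
              .groupby("station")["energy_kwh"]
              .resample(freq).sum()                          # total charging amount per period
              .unstack("station")
              .interpolate(method="linear"))                 # fill missing records
    norm = (load - load.min()) / (load.max() - load.min())   # MinMax normalization
    x_p = norm                                               # adjacent time periods
    x_d = norm.shift(freq="1D")                              # same period on the previous day
    x_w = norm.shift(freq="7D")                              # same period in the previous week
    return x_p, x_d, x_w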
Secondly, graph structure learning: the graph structure learning module 201 calculates the similarity between charging stations from the historical load change data of each charging station through a similarity measurement function, generates a multi-charging-station graph structure with the charging stations as nodes and the similarities as edges, and inputs the graph structure information into the three groups of spatio-temporal feature extraction modules 202.
Thirdly, spatio-temporal feature extraction: the three groups of data and the learned graph structure information are input into three groups of spatio-temporal feature extraction modules 202, which extract spatio-temporal features using a graph convolutional neural network and a dilated convolution network; the learned feature representations are fed back into the graph structure learning module 201 for iterative graph structure learning. The three groups of spatio-temporal features are then fused and input into a fully connected layer to obtain the final prediction result.
Fourthly, model joint training, namely, for the graph structure learning module 201, designing a loss function to control the sparsity and connectivity of the graph structure obtained by learning. For the spatio-temporal feature extraction module 202, a loss function is designed to reduce the difference between the prediction result and the tag data. And performing weighted summation on the two loss functions to construct a joint loss function, thereby completing joint training of the graph structure learning module 201 and the time-space feature extraction module 202.
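The joint training step can be summarized in the following hedged PyTorch-style sketch; the graph_learner and st_extractor modules and the weighting factor are placeholders standing in for the graph structure learning module 201 and the spatio-temporal feature extraction module 202.

import torch

def train_step(graph_learner, st_extractor, optimizer, x_batch, y_batch, lam=0.1):
    adjacency, l_graph = graph_learner(x_batch)        # learned graph + its regularization loss
    y_hat = st_extractor(x_batch, adjacency)           # spatio-temporal features -> prediction
    l_pred = torch.mean(torch.abs(y_hat - y_batch))    # difference between prediction and labels
    loss = l_pred + lam * l_graph                      # weighted sum of the two loss functions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)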
The technical embodiments of the present invention will be described in detail below, and it should be noted that the specific technical embodiments herein do not limit the scope of the present invention.
The general framework of the present invention is shown in fig. 2 and mainly includes the graph structure learning module 201 and the spatio-temporal feature extraction module 202. At the data input layer, three groups of data of multiple charging stations, at adjacent time periods, day intervals and week intervals, are input simultaneously, i.e. X_P, X_D, X_W ∈ R^{M×N×D}, where M is the sequence length, N is the number of charging station nodes, and D is the input dimension. In the graph structure learning module 201, an implicit graph structure among the multi-charging-station nodes is generated from the data input; different data inputs X_P, X_D, X_W generate different graph structures G_P, G_D, G_W, and the generated graph structures are iteratively updated during spatio-temporal feature extraction. The spatio-temporal feature extraction module 202 is divided into three sub-modules ST-P, ST-D, ST-W with the same structure but different input data X_P, X_D, X_W, which respectively extract the temporal proximity, periodic and trend characteristics of the load change. Finally, the three different spatio-temporal features are fused, and a fully connected layer outputs the prediction result.
1. Graph structure learning module 201
In the graph structure learning module 201, the present invention learns graph structures and graph convolution neural network (GCN) parameters simultaneously in an iterative manner. The key principle of the iterative graph structure learning framework is to use more accurate graph node feature representation to learn a better graph structure, and simultaneously, extract better node features based on the learned graph structure, as shown in fig. 3. Unlike methods that generate graphs based on raw node features, node features learned by GCN can provide useful information for graph structure learning. On the other hand, the newly learned graph structure information may provide better graph input for GCN parameter learning.
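The iterative principle can be summarized in the following hedged Python sketch; learn_graph and gcn_layer are hypothetical callables standing in for the similarity-metric learner and the graph convolutional network.

def iterative_graph_learning(x, learn_graph, gcn_layer, iterations=3):
    features = x
    graph = learn_graph(features)                # initial graph from raw node features
    for _ in range(iterations):
        features = gcn_layer(features, graph)    # better node features from the current graph
        graph = learn_graph(features)            # better graph from the refined features
    return graph, features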
In order to learn the hidden association between the charging station nodes, a graph similarity measurement learning method is adopted, and cosine similarity is designed as a measurement function:
s_ij = cos(w ⊙ x_i, w ⊙ x_j)
where ⊙ denotes the Hadamard product, w is a learnable parameter, and x_i, x_j are the input data. In order to improve the expressive capacity, the cosine similarity is expanded to a multi-head version with m weight vectors, independent similarity matrices are computed respectively, and the average is taken as the final similarity:
s_ij^p = cos(w_p ⊙ x_i, w_p ⊙ x_j)
s_ij = (1/m) Σ_{p=1}^{m} s_ij^p
where s_ij^p computes the cosine similarity between the input vectors x_i and x_j under the p-th weight vector w_p, and each head can be regarded as capturing part of the vectors' semantic features.
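A hedged NumPy sketch of the multi-head weighted cosine similarity above follows; in the real module the weight vectors are learnable, whereas here they are fixed inputs for illustration.

import numpy as np

def multihead_cosine_adjacency(x, weights, eps=1e-8):
    """x: (N, D) node features; weights: (m, D) weight vectors -> (N, N) similarity matrix."""
    sims = []
    for w in weights:
        z = x * w                                            # Hadamard-weighted features
        z = z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)
        sims.append(z @ z.T)                                 # pairwise cosine similarities
    return np.mean(sims, axis=0)                             # average over the m heads

x = np.random.rand(5, 16)      # 5 charging stations, 16-dimensional load features
w = np.random.rand(4, 16)      # m = 4 heads
print(multihead_cosine_adjacency(x, w).shape)                # (5, 5) adjacency prior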
In order to ensure the quality of the learned graph structure, the smoothness, connectivity and sparsity of the graph are controlled by graph regularization. Smoothness means that the graph signal changes smoothly between adjacent nodes. Given the symmetric weighted adjacency matrix A of an undirected graph, the Dirichlet energy is used to measure the smoothness of the graph signal X ∈ R^{N×D}:
Ω(A, X) = (1/2N²) Σ_{i,j} A_ij ‖x_i − x_j‖² = (1/N²) tr(Xᵀ L X)
where tr(·) represents the trace of the matrix, L is the graph Laplacian matrix, and D = Σ_j A_ij is the degree matrix. By minimizing Ω(A, X), neighboring nodes are encouraged to have similar characteristics, ensuring smoothness of the graph signal. To control the sparsity of the graph, a sparsity constraint is added to the adjacency matrix:
f(A) = −(β/N) 1ᵀ log(A·1) + (γ/N²) ‖A‖_F²
where ‖·‖_F is the Frobenius norm, and β and γ are non-negative hyper-parameters. The first term penalizes the formation of a disconnected graph, and the second term penalizes the node degrees to control the sparsity of the graph. The smoothness loss and the sparsity constraint loss are added to obtain the overall regularization loss of the graph:
L_g = α Ω(A, X) + f(A)
where α is a non-negative hyper-parameter. The overall regularization loss controls the smoothness, connectivity and sparsity of the graph, thereby ensuring the quality of the learned graph structure.
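A hedged PyTorch sketch of the overall graph regularization loss follows; the scaling constants mirror the reconstruction above and may differ from the reference implementation.

import torch

def graph_regularization(a, x, alpha=0.2, beta=0.1, gamma=0.1):
    """a: (N, N) learned adjacency matrix; x: (N, D) node features."""
    n = a.size(0)
    laplacian = torch.diag(a.sum(dim=1)) - a                        # graph Laplacian L = D - A
    smooth = alpha * torch.trace(x.T @ laplacian @ x) / (n * n)     # Dirichlet-energy smoothness
    connect = -beta * torch.log(a.sum(dim=1) + 1e-8).sum() / n      # penalize disconnected graphs
    sparse = gamma * (a ** 2).sum() / (n * n)                       # Frobenius-norm sparsity term
    return smooth + connect + sparse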
2. Spatio-temporal feature extraction module 202
The invention designs the spatio-temporal feature extraction module 202 based on a graph convolutional neural network and a dilated convolution network, as shown in fig. 4; the ST-P, ST-D and ST-W modules in fig. 3 share this structure. Each ST module comprises two ST sub-blocks, and each ST sub-block adopts a sandwich-like structure consisting of two gated dilated convolution layers and one gated graph convolution layer: the gated dilated convolution network extracts the temporal features of the load sequence, and the gated graph convolution extracts the spatial features across multiple charging stations, realizing spatio-temporal feature extraction. The gated graph convolution layer is a bridge connecting the upper and lower dilated convolution layers, and after dilated convolution the spatial state can propagate rapidly through the graph convolution layer. The sandwich structure also applies a bottleneck strategy: scale and feature compression are realized by up- and down-sampling before and after the gated graph convolution, reducing the number of model parameters. G is the graph structure information obtained by learning and serves as the graph convolution prior in each ST sub-block; F is the extracted spatio-temporal feature, which is fed back to the IDGL graph structure learning module 201 for iterative learning of the graph structure information.
Although recurrent neural networks (RNN, GRU, LSTM, etc.) have achieved good results in time series analysis, they suffer from time-consuming iteration and slow response to dynamic changes. The dilated convolution network has the advantages of fast training, simple structure and no dependency constraints between computation steps, while also enlarging the receptive field, as shown in fig. 5. The dilated convolution skips a fixed distance at each step; given a one-dimensional sequence input X ∈ R^T and a filter f ∈ R^K, the dilated convolution at time t can be expressed as:
X *_d f (t) = Σ_{k=0}^{K−1} f(k) · X(t − d·k)
where d is the expansion factor that determines the jump distance for each convolution. By stacking the dilation convolution layers in incremental dilation factors, the field of view of the model increases exponentially, which allows the dilation convolution to extract longer time dependencies with fewer layers and reduces computational complexity.
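The dilated convolution and its growing receptive field can be illustrated with the following hedged NumPy sketch (a plain correlation form is used; the exact indexing convention is an assumption).

import numpy as np

def dilated_conv1d(x, f, d):
    """x: (T,) sequence, f: (K,) filter, d: dilation factor."""
    k = len(f)
    return np.array([sum(f[j] * x[t + d * j] for j in range(k))
                     for t in range(len(x) - d * (k - 1))])

x = np.arange(16, dtype=float)
h = dilated_conv1d(x, np.ones(2) / 2, d=1)   # dilation 1
h = dilated_conv1d(h, np.ones(2) / 2, d=2)   # dilation 2
h = dilated_conv1d(h, np.ones(2) / 2, d=4)   # dilation 4: receptive field of 8 input steps
print(h.shape)                               # (9,)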
The invention adopts gated expansion convolution in the ST sub-block to extract the time sequence dynamic characteristics of load change:
H' = Dil_Conv_f(H^l) = f * H^l
where H^l is the input of layer l, Dil_Conv is the dilated convolution operation, f is the convolution kernel, H' is the output, N is the number of charging station nodes, M is the length of the load change sequence, K is the size of the convolution kernel, and C_i, C_o are the numbers of input and output channels respectively. H' is split evenly, and a gated linear unit (GLU) is used to increase the non-linearity:
(H'_1, H'_2) = split(H')
H^{l+1} = tanh(H'_1) ⊙ σ(H'_2)
where split denotes the splitting operation, H'_1 and H'_2 are the gated inputs, H^{l+1} is the output, and tanh and sigmoid (σ) are the activation functions. In the gated graph convolution layer, in order to extract spatial and local temporal features simultaneously, a first-order approximation GCN is embedded into the temporal gating unit:
θ *g H^l = θ Â H^l
(H'_1, H'_2) = split(θ *g H^l)
H^{l+1} = tanh(H'_1) ⊙ σ(H'_2)
where *g is the graph convolution operation, θ is a learnable parameter, and Â is prior information derived from the graph structure information G by the first-order approximation:
Â = D^{-1/2}(A + I_N) D^{-1/2}
A ∈ R^{N×N} is the adjacency matrix of graph G and D ∈ R^{N×N} is the degree matrix. In the gated attention-dilated convolution layer, global temporal features are further extracted; to capture temporal dynamics, i.e. the fact that different sequence inputs may have different time dependencies, a self-attention mechanism is introduced:
Q = H^l w_Q ,  K = H^l w_K ,  V = H^l w_V
Att(Q, K, V) = softmax(Q Kᵀ / √d_K) V
H^{l+1} = Dil_Conv_f(Att(Q, K, V)) = f * Att(Q, K, V)
where Q, K and V are the query, key and value matrices respectively, w_Q, w_K, w_V are learnable parameters, Att is the attention calculation function, and H^{l+1} is the output of layer l.
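Putting the pieces together, a hedged PyTorch sketch of one ST sub-block (the sandwich of gated dilated convolution, gated graph convolution and gated dilated convolution) is shown below; the layer widths, tensor layout (batch, channels, nodes, time) and class name are illustrative assumptions, and the self-attention step is omitted for brevity (see the earlier attention sketch).

import torch
import torch.nn as nn

def glu(x, dim=-1):
    a, b = x.chunk(2, dim=dim)
    return torch.tanh(a) * torch.sigmoid(b)                 # gated linear unit

class STSubBlock(nn.Module):
    def __init__(self, channels=32, kernel=2, dilation=2):
        super().__init__()
        self.conv_in = nn.Conv2d(channels, 2 * channels, (1, kernel), dilation=(1, dilation))
        self.theta = nn.Linear(channels, 2 * channels)      # graph convolution parameters
        self.conv_out = nn.Conv2d(channels, 2 * channels, (1, kernel), dilation=(1, dilation))

    def forward(self, x, a_hat):
        # x: (batch, channels, nodes, time); a_hat: (nodes, nodes) normalized adjacency prior
        h = glu(self.conv_in(x), dim=1)                                  # gated dilated convolution
        h = h.permute(0, 2, 3, 1)                                        # -> (batch, nodes, time, C)
        h = glu(self.theta(torch.einsum("nm,bmtc->bntc", a_hat, h)))     # gated graph convolution
        h = h.permute(0, 3, 1, 2)                                        # -> (batch, C, nodes, time)
        return glu(self.conv_out(h), dim=1)                              # gated dilated convolution

x = torch.randn(4, 32, 10, 12)                   # 4 samples, 32 channels, 10 stations, 12 steps
print(STSubBlock()(x, torch.eye(10)).shape)      # torch.Size([4, 32, 10, 8])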
3. Joint loss function
The spatio-temporal features F_P, F_D, F_W extracted by the ST-P, ST-D and ST-W modules respectively are concatenated into
F = concat(F_P, F_D, F_W)
F, which simultaneously contains the temporal proximity, periodic and trend characteristics of the load change, is input into a fully connected layer to compute the final prediction result:
Ŷ = W_p F + b_p
where W_p and b_p are learnable parameters. The mean absolute error (MAE) is used as the prediction loss function:
L_pred = (1/n) Σ_{i=1}^{n} | Ŷ_i − Y_i |
since the graph structure learning is performed iteratively in the process of spatio-temporal feature extraction, a joint loss function of graph structure learning and prediction is defined:
L = L_pred + λ L_g
where λ is a parameter used to balance the influence of the graph structure learning module 201 and the spatio-temporal feature extraction module 202. By optimizing this loss function, the model can be trained as a whole, and the model parameters are learned while the optimal graph structure is learned.
The key points and points to be protected of the invention mainly comprise the following points:
1. The multi-charging-station load prediction technical route: graph structure learning is first used to convert the multi-charging-station load prediction problem into a graph-structured data processing problem, and a graph convolutional neural network and a temporal network are then used to extract the spatio-temporal features of the load and make predictions.
2. The multi-charging-station collaborative load prediction model comprises an iterative graph structure learning module 201, three space-time feature extraction modules 202 and a feature fusion and prediction module.
3. A method for iteratively mining implicit associations among multiple charging stations is utilized by a graph structure learning module 201.
4. A method for extracting the space-time characteristics of the load changes of multiple charging stations by using a space-time characteristic extraction module 202 composed of a graph convolution neural network and an expansion convolution network.
5. The joint loss function designed by the present invention for the graph structure learning module 201 and the spatio-temporal feature extraction module 202.
Compared with the existing charging station load prediction method, the method has the advantages that:
1. Compared with traditional mathematical-statistical methods, there is no need to consider many external influencing factors, the modeling process is simple, only a small number of model hyper-parameters need to be determined during training, and the prediction accuracy is higher.
2. Compared with the load prediction method based on deep learning:
i) the method mines the implicit associations among multiple charging stations with graph structure learning, so that discrete charging station sites are constructed into a graph structure; a graph convolutional neural network can then predict the load changes of multiple charging stations simultaneously, breaking the limitation that deep learning methods can only predict the load of a single charging station;
ii) besides the temporal characteristics of the load change of a single charging station, the spatial characteristics of the load changes across multiple charging stations are also considered, so the extracted spatio-temporal features are more comprehensive;
iii) when the model designed by the invention predicts the load of a new charging station, only graph node information needs to be added to the current model, whereas traditional methods have to train a new model, so the model of the invention is more scalable.
A large number of experiments are carried out on the multi-charging-station cooperative load prediction model, and the result proves that the multi-charging-station cooperative load prediction model is feasible and effective.
Example 3
A storage medium storing a program file that can implement any one of the above-described multi-charging-station cooperative load prediction methods.
Example 4
A processor is used for running a program, wherein the program executes the multi-charging-station cooperative load prediction method in any one of the above manners.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described system embodiments are merely illustrative, and for example, a division of a unit may be a logical division, and an actual implementation may have another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A multi-charging-station cooperative load prediction method is characterized by comprising the following steps:
mining implicit associations in the load spatio-temporal information of the multiple charging stations and generating a multi-charging-station graph structure from the load spatio-temporal information;
extracting space-time characteristics in load space-time information of the multiple charging stations based on the multiple charging station graph structure, and performing iterative graph structure learning on the multiple charging station graph structure by using the extracted space-time characteristics;
and constructing a joint loss function, and performing joint training on the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process.
2. The multi-charging-station collaborative load prediction method according to claim 1, further comprising:
and constructing a multi-charging-station graph structure after the joint training and performing multi-charging-station collaborative load prediction on the load spatio-temporal information of the multiple charging stations by using a spatio-temporal feature extraction process.
3. The multi-charging-station collaborative load prediction method according to claim 1, wherein prior to said mining of implicit associations among load spatio-temporal information of multiple charging stations and generating the multi-charging-station load spatio-temporal information into a multi-charging-station graph structure, the method further comprises:
preprocessing load space-time information of a plurality of charging stations, aggregating data in each preset time period, wherein the aggregated data represent the total charging capacity of the charging stations in the time period;
dividing the data into three groups according to adjacent time periods, day time intervals and week time intervals, filling lost data by adopting linear interpolation, and then normalizing the data by utilizing a MinMax method.
4. The multi-charging-station cooperative load prediction method according to claim 3, wherein the method specifically comprises:
the three groups of data are respectively input into a graph structure learning module, the graph structure learning module calculates the similarity among charging stations based on the historical load change data of each charging station through a similarity measurement function, generates a multi-charging-station graph structure with the charging stations as nodes and the similarities as edges, and inputs the graph structure information into three groups of spatio-temporal feature extraction modules;
inputting the three groups of data and the learned graph structure information into three groups of spatio-temporal feature extraction modules, wherein the spatio-temporal feature extraction modules respectively extract spatio-temporal features by utilizing a graph convolutional neural network and a dilated convolution network, then inputting the learned features into the graph structure learning module for iterative graph structure learning, performing feature fusion on the three groups of spatio-temporal features, and inputting the fused features into a fully connected layer to obtain the final prediction result;
and designing a loss function for the graph structure learning module to control the sparsity and the connectivity of the graph structure obtained by learning, designing the loss function for the time and space feature extraction module to reduce the difference between a prediction result and label data, and carrying out weighted summation on the two loss functions to construct a joint loss function so as to realize the joint training of the graph structure learning module and the time and space feature extraction module.
5. The method of claim 4, wherein three groups of data of the plurality of charging stations, at adjacent time periods, day intervals and week intervals, are simultaneously input into the data input layer, X_P, X_D, X_W ∈ R^{M×N×D}, where M is the sequence length, N is the number of charging station nodes, and D is the input dimension;
in the graph structure learning module, an implicit graph structure among the multi-charging-station nodes is generated according to the data input, and different data inputs X_P, X_D, X_W generate different graph structures G_P, G_D, G_W;
iteratively updating the generated graph structure in the process of extracting the spatio-temporal features, wherein the spatio-temporal feature extraction module is divided into three sub-modules ST-P, ST-D, ST-W with the same structure but different input data X_P, X_D, X_W, which respectively extract the temporal proximity, periodic and trend characteristics of the load change;
and finally, fusing three different space-time characteristics, and outputting a prediction result by using a full-connection layer.
6. The multi-charging-station cooperative load prediction method according to claim 5, wherein a graph similarity metric learning method is adopted in the graph structure learning module for learning hidden relations among charging station nodes, and cosine similarity is designed as a metric function:
s_ij = cos(w ⊙ x_i, w ⊙ x_j)
where ⊙ denotes the Hadamard product, w is a learnable parameter, and x_i, x_j are the input data; the cosine similarity is expanded to a multi-head version with m weight vectors, independent similarity matrices are computed respectively, and the average is taken as the final similarity:
s_ij^p = cos(w_p ⊙ x_i, w_p ⊙ x_j)
s_ij = (1/m) Σ_{p=1}^{m} s_ij^p
where s_ij^p computes the cosine similarity between the input vectors x_i and x_j under the p-th weight vector w_p, and each head is regarded as capturing part of the vectors' semantic features;
given the symmetric weighted adjacency matrix A of the undirected graph, the Dirichlet energy is used to measure the smoothness of the graph signal X ∈ R^{N×D}:
Ω(A, X) = (1/2N²) Σ_{i,j} A_ij ‖x_i − x_j‖² = (1/N²) tr(Xᵀ L X)
where tr(·) represents the trace of the matrix, L is the graph Laplacian matrix, and D = Σ_j A_ij is the degree matrix; neighboring nodes have similar characteristics by minimizing Ω(A, X); a sparsity constraint is added to the adjacency matrix:
f(A) = −(β/N) 1ᵀ log(A·1) + (γ/N²) ‖A‖_F²
where ‖·‖_F is the Frobenius norm, and β and γ are non-negative hyper-parameters; the first term penalizes the formation of a disconnected graph, and the second term penalizes the node degrees to control the sparsity of the graph; the smoothness loss and the sparsity constraint loss are added to obtain the overall regularization loss of the graph:
L_g = α Ω(A, X) + f(A)
where α is a non-negative hyper-parameter; the overall regularization loss controls the smoothness, connectivity and sparsity of the graph.
7. The multi-charging-station cooperative load prediction method according to claim 6, wherein in the spatio-temporal feature extraction module, each ST module comprises two ST sub-blocks, each ST sub-block comprises two gated dilated convolution layers and one gated graph convolution layer, the gated dilated convolution network is used for extracting the temporal features of the load sequence, and the gated graph convolution is used for extracting the spatial features of the multiple charging stations; the gated graph convolution layer is a bridge connecting the upper and lower dilated convolution layers, and after dilated convolution the spatial state propagates rapidly through the graph convolution layer; G is the graph structure information obtained by learning and serves as the graph convolution prior in each ST sub-block, and F is the extracted spatio-temporal feature, which is transmitted back to the IDGL graph structure learning module for iterative learning of the graph structure information;
the dilated convolution skips a fixed distance at each step; given a one-dimensional sequence input X ∈ R^T and a filter f ∈ R^K, the dilated convolution at time t is expressed as:
X *_d f (t) = Σ_{k=0}^{K−1} f(k) · X(t − d·k)
where d is the expansion factor, determining the jump distance for each convolution;
gated expansion convolution is adopted in the ST sub-block to extract the time sequence dynamic characteristics of load change:
H' = Dil_Conv_f(H^l) = f * H^l
where H^l is the input of layer l, Dil_Conv is the dilated convolution operation, f is the convolution kernel, H' is the output, N is the number of charging station nodes, M is the length of the load change sequence, K is the size of the convolution kernel, and C_i, C_o are the numbers of input and output channels respectively; H' is split evenly, and a gated linear unit (GLU) is used to increase the non-linearity:
(H'_1, H'_2) = split(H')
H^{l+1} = tanh(H'_1) ⊙ σ(H'_2)
where split denotes the splitting operation, H'_1 and H'_2 are the gated inputs, H^{l+1} is the output, and tanh and sigmoid (σ) are activation functions; at the gated graph convolution layer, a first-order approximation GCN is embedded into the temporal gating unit:
θ *g H^l = θ Â H^l
(H'_1, H'_2) = split(θ *g H^l)
H^{l+1} = tanh(H'_1) ⊙ σ(H'_2)
where *g is the graph convolution operation, θ is a learnable parameter, and Â is prior information derived from the graph structure information G by the first-order approximation:
Â = D^{-1/2}(A + I_N) D^{-1/2}
A ∈ R^{N×N} is the adjacency matrix of graph G and D ∈ R^{N×N} is the degree matrix; in the gated attention-dilated convolution, a self-attention mechanism is introduced:
Q = H^l w_Q ,  K = H^l w_K ,  V = H^l w_V
Att(Q, K, V) = softmax(Q Kᵀ / √d_K) V
H^{l+1} = Dil_Conv_f(Att(Q, K, V)) = f * Att(Q, K, V)
where Q, K and V are the query, key and value matrices respectively, w_Q, w_K, w_V are learnable parameters, Att is the attention calculation function, and H^{l+1} is the output of layer l.
8. The multi-charging-station cooperative load prediction method as claimed in claim 7, wherein the spatio-temporal features F_P, F_D, F_W respectively extracted by the ST-P, ST-D and ST-W modules are concatenated into F = concat(F_P, F_D, F_W); F, which simultaneously contains the temporal proximity, periodic and trend characteristics of the load change, is input into a fully connected layer to compute the final prediction result:
Ŷ = W_p F + b_p
where W_p and b_p are learnable parameters; the mean absolute error (MAE) is used as the prediction loss function:
L_pred = (1/n) Σ_{i=1}^{n} | Ŷ_i − Y_i |
defining a joint loss function of graph structure learning and prediction:
L = L_pred + λ L_g
wherein λ is a parameter used to balance the influence of the graph structure learning module and the spatio-temporal feature extraction module.
9. A multi-charging-station cooperative load prediction apparatus, comprising:
the graph structure learning module is used for mining implicit associations among the load space-time information of the multiple charging stations and generating the load space-time information of the multiple charging stations into a multiple charging station graph structure;
the time-space feature extraction module is used for extracting load time-space information of the multiple charging stations and time-space features in the multiple charging station graph structure and performing iterative graph structure learning on the multiple charging station graph structure by using the extracted time-space features;
and the joint loss function construction module is used for constructing a joint loss function and performing joint training on the construction of the multi-charging-station graph structure and the spatio-temporal feature extraction process.
10. The multi-charging-station cooperative load prediction apparatus according to claim 9, further comprising:
and the multi-charging-station collaborative load prediction module is used for constructing a multi-charging-station graph structure and performing multi-charging-station collaborative load prediction on the load spatio-temporal information of the multiple charging stations by using the spatio-temporal feature extraction process after the joint training.
CN202210687630.5A 2022-06-17 2022-06-17 Multi-charging-station cooperative load prediction method and device Pending CN115080795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210687630.5A CN115080795A (en) 2022-06-17 2022-06-17 Multi-charging-station cooperative load prediction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210687630.5A CN115080795A (en) 2022-06-17 2022-06-17 Multi-charging-station cooperative load prediction method and device

Publications (1)

Publication Number Publication Date
CN115080795A true CN115080795A (en) 2022-09-20

Family

ID=83253660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210687630.5A Pending CN115080795A (en) 2022-06-17 2022-06-17 Multi-charging-station cooperative load prediction method and device

Country Status (1)

Country Link
CN (1) CN115080795A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117150326A (en) * 2023-10-31 2023-12-01 深圳市大数据研究院 New energy node output power prediction method, device, equipment and storage medium
CN117150326B (en) * 2023-10-31 2024-02-23 深圳市大数据研究院 New energy node output power prediction method, device, equipment and storage medium
CN117993963A (en) * 2024-04-03 2024-05-07 三峡电能有限公司 Space-time configuration-based multi-charging-pile station electricity consumption prediction method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination